Nov 28 23:34:49 np0005539482 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 28 23:34:49 np0005539482 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 28 23:34:49 np0005539482 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 28 23:34:49 np0005539482 kernel: BIOS-provided physical RAM map:
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 28 23:34:49 np0005539482 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 28 23:34:49 np0005539482 kernel: NX (Execute Disable) protection: active
Nov 28 23:34:49 np0005539482 kernel: APIC: Static calls initialized
Nov 28 23:34:49 np0005539482 kernel: SMBIOS 2.8 present.
Nov 28 23:34:49 np0005539482 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 28 23:34:49 np0005539482 kernel: Hypervisor detected: KVM
Nov 28 23:34:49 np0005539482 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 28 23:34:49 np0005539482 kernel: kvm-clock: using sched offset of 3164990558 cycles
Nov 28 23:34:49 np0005539482 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 28 23:34:49 np0005539482 kernel: tsc: Detected 2799.998 MHz processor
Nov 28 23:34:49 np0005539482 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 28 23:34:49 np0005539482 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 28 23:34:49 np0005539482 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 28 23:34:49 np0005539482 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 28 23:34:49 np0005539482 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 28 23:34:49 np0005539482 kernel: Using GB pages for direct mapping
Nov 28 23:34:49 np0005539482 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 28 23:34:49 np0005539482 kernel: ACPI: Early table checksum verification disabled
Nov 28 23:34:49 np0005539482 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 28 23:34:49 np0005539482 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 23:34:49 np0005539482 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 23:34:49 np0005539482 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 23:34:49 np0005539482 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 28 23:34:49 np0005539482 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 23:34:49 np0005539482 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 23:34:49 np0005539482 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 28 23:34:49 np0005539482 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 28 23:34:49 np0005539482 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 28 23:34:49 np0005539482 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 28 23:34:49 np0005539482 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 28 23:34:49 np0005539482 kernel: No NUMA configuration found
Nov 28 23:34:49 np0005539482 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 28 23:34:49 np0005539482 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 28 23:34:49 np0005539482 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 28 23:34:49 np0005539482 kernel: Zone ranges:
Nov 28 23:34:49 np0005539482 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 28 23:34:49 np0005539482 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 28 23:34:49 np0005539482 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 28 23:34:49 np0005539482 kernel:  Device   empty
Nov 28 23:34:49 np0005539482 kernel: Movable zone start for each node
Nov 28 23:34:49 np0005539482 kernel: Early memory node ranges
Nov 28 23:34:49 np0005539482 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 28 23:34:49 np0005539482 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 28 23:34:49 np0005539482 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 28 23:34:49 np0005539482 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 28 23:34:49 np0005539482 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 28 23:34:49 np0005539482 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 28 23:34:49 np0005539482 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 28 23:34:49 np0005539482 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 28 23:34:49 np0005539482 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 28 23:34:49 np0005539482 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 28 23:34:49 np0005539482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 28 23:34:49 np0005539482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 28 23:34:49 np0005539482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 28 23:34:49 np0005539482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 28 23:34:49 np0005539482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 28 23:34:49 np0005539482 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 28 23:34:49 np0005539482 kernel: TSC deadline timer available
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Max. logical packages:   8
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Max. logical dies:       8
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Max. dies per package:   1
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Max. threads per core:   1
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Num. cores per package:     1
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Num. threads per package:   1
Nov 28 23:34:49 np0005539482 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 28 23:34:49 np0005539482 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 28 23:34:49 np0005539482 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 28 23:34:49 np0005539482 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 28 23:34:49 np0005539482 kernel: Booting paravirtualized kernel on KVM
Nov 28 23:34:49 np0005539482 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 28 23:34:49 np0005539482 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 28 23:34:49 np0005539482 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 28 23:34:49 np0005539482 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 28 23:34:49 np0005539482 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 28 23:34:49 np0005539482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 28 23:34:49 np0005539482 kernel: random: crng init done
Nov 28 23:34:49 np0005539482 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: Fallback order for Node 0: 0 
Nov 28 23:34:49 np0005539482 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 28 23:34:49 np0005539482 kernel: Policy zone: Normal
Nov 28 23:34:49 np0005539482 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 28 23:34:49 np0005539482 kernel: software IO TLB: area num 8.
Nov 28 23:34:49 np0005539482 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 28 23:34:49 np0005539482 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 28 23:34:49 np0005539482 kernel: ftrace: allocated 193 pages with 3 groups
Nov 28 23:34:49 np0005539482 kernel: Dynamic Preempt: voluntary
Nov 28 23:34:49 np0005539482 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 28 23:34:49 np0005539482 kernel: rcu: 	RCU event tracing is enabled.
Nov 28 23:34:49 np0005539482 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 28 23:34:49 np0005539482 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 28 23:34:49 np0005539482 kernel: 	Rude variant of Tasks RCU enabled.
Nov 28 23:34:49 np0005539482 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 28 23:34:49 np0005539482 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 28 23:34:49 np0005539482 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 28 23:34:49 np0005539482 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 28 23:34:49 np0005539482 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 28 23:34:49 np0005539482 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 28 23:34:49 np0005539482 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 28 23:34:49 np0005539482 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 28 23:34:49 np0005539482 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 28 23:34:49 np0005539482 kernel: Console: colour VGA+ 80x25
Nov 28 23:34:49 np0005539482 kernel: printk: console [ttyS0] enabled
Nov 28 23:34:49 np0005539482 kernel: ACPI: Core revision 20230331
Nov 28 23:34:49 np0005539482 kernel: APIC: Switch to symmetric I/O mode setup
Nov 28 23:34:49 np0005539482 kernel: x2apic enabled
Nov 28 23:34:49 np0005539482 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 28 23:34:49 np0005539482 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 28 23:34:49 np0005539482 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 28 23:34:49 np0005539482 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 28 23:34:49 np0005539482 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 28 23:34:49 np0005539482 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 28 23:34:49 np0005539482 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 28 23:34:49 np0005539482 kernel: Spectre V2 : Mitigation: Retpolines
Nov 28 23:34:49 np0005539482 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 28 23:34:49 np0005539482 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 28 23:34:49 np0005539482 kernel: RETBleed: Mitigation: untrained return thunk
Nov 28 23:34:49 np0005539482 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 28 23:34:49 np0005539482 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 28 23:34:49 np0005539482 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 28 23:34:49 np0005539482 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 28 23:34:49 np0005539482 kernel: x86/bugs: return thunk changed
Nov 28 23:34:49 np0005539482 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 28 23:34:49 np0005539482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 28 23:34:49 np0005539482 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 28 23:34:49 np0005539482 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 28 23:34:49 np0005539482 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 28 23:34:49 np0005539482 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 28 23:34:49 np0005539482 kernel: Freeing SMP alternatives memory: 40K
Nov 28 23:34:49 np0005539482 kernel: pid_max: default: 32768 minimum: 301
Nov 28 23:34:49 np0005539482 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 28 23:34:49 np0005539482 kernel: landlock: Up and running.
Nov 28 23:34:49 np0005539482 kernel: Yama: becoming mindful.
Nov 28 23:34:49 np0005539482 kernel: SELinux:  Initializing.
Nov 28 23:34:49 np0005539482 kernel: LSM support for eBPF active
Nov 28 23:34:49 np0005539482 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 28 23:34:49 np0005539482 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 28 23:34:49 np0005539482 kernel: ... version:                0
Nov 28 23:34:49 np0005539482 kernel: ... bit width:              48
Nov 28 23:34:49 np0005539482 kernel: ... generic registers:      6
Nov 28 23:34:49 np0005539482 kernel: ... value mask:             0000ffffffffffff
Nov 28 23:34:49 np0005539482 kernel: ... max period:             00007fffffffffff
Nov 28 23:34:49 np0005539482 kernel: ... fixed-purpose events:   0
Nov 28 23:34:49 np0005539482 kernel: ... event mask:             000000000000003f
Nov 28 23:34:49 np0005539482 kernel: signal: max sigframe size: 1776
Nov 28 23:34:49 np0005539482 kernel: rcu: Hierarchical SRCU implementation.
Nov 28 23:34:49 np0005539482 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 28 23:34:49 np0005539482 kernel: smp: Bringing up secondary CPUs ...
Nov 28 23:34:49 np0005539482 kernel: smpboot: x86: Booting SMP configuration:
Nov 28 23:34:49 np0005539482 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 28 23:34:49 np0005539482 kernel: smp: Brought up 1 node, 8 CPUs
Nov 28 23:34:49 np0005539482 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 28 23:34:49 np0005539482 kernel: node 0 deferred pages initialised in 10ms
Nov 28 23:34:49 np0005539482 kernel: Memory: 7765680K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616272K reserved, 0K cma-reserved)
Nov 28 23:34:49 np0005539482 kernel: devtmpfs: initialized
Nov 28 23:34:49 np0005539482 kernel: x86/mm: Memory block size: 128MB
Nov 28 23:34:49 np0005539482 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 28 23:34:49 np0005539482 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: pinctrl core: initialized pinctrl subsystem
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 28 23:34:49 np0005539482 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 28 23:34:49 np0005539482 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 28 23:34:49 np0005539482 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 28 23:34:49 np0005539482 kernel: audit: initializing netlink subsys (disabled)
Nov 28 23:34:49 np0005539482 kernel: audit: type=2000 audit(1764390887.095:1): state=initialized audit_enabled=0 res=1
Nov 28 23:34:49 np0005539482 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 28 23:34:49 np0005539482 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 28 23:34:49 np0005539482 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 28 23:34:49 np0005539482 kernel: cpuidle: using governor menu
Nov 28 23:34:49 np0005539482 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 28 23:34:49 np0005539482 kernel: PCI: Using configuration type 1 for base access
Nov 28 23:34:49 np0005539482 kernel: PCI: Using configuration type 1 for extended access
Nov 28 23:34:49 np0005539482 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 28 23:34:49 np0005539482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 28 23:34:49 np0005539482 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 28 23:34:49 np0005539482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 28 23:34:49 np0005539482 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 28 23:34:49 np0005539482 kernel: Demotion targets for Node 0: null
Nov 28 23:34:49 np0005539482 kernel: cryptd: max_cpu_qlen set to 1000
Nov 28 23:34:49 np0005539482 kernel: ACPI: Added _OSI(Module Device)
Nov 28 23:34:49 np0005539482 kernel: ACPI: Added _OSI(Processor Device)
Nov 28 23:34:49 np0005539482 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 28 23:34:49 np0005539482 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 28 23:34:49 np0005539482 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 28 23:34:49 np0005539482 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 28 23:34:49 np0005539482 kernel: ACPI: Interpreter enabled
Nov 28 23:34:49 np0005539482 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 28 23:34:49 np0005539482 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 28 23:34:49 np0005539482 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 28 23:34:49 np0005539482 kernel: PCI: Using E820 reservations for host bridge windows
Nov 28 23:34:49 np0005539482 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 28 23:34:49 np0005539482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 28 23:34:49 np0005539482 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [3] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [4] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [5] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [6] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [7] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [8] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [9] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [10] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [11] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [12] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [13] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [14] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [15] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [16] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [17] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [18] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [19] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [20] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [21] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [22] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [23] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [24] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [25] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [26] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [27] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [28] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [29] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [30] registered
Nov 28 23:34:49 np0005539482 kernel: acpiphp: Slot [31] registered
Nov 28 23:34:49 np0005539482 kernel: PCI host bridge to bus 0000:00
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 28 23:34:49 np0005539482 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 28 23:34:49 np0005539482 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 28 23:34:49 np0005539482 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 28 23:34:49 np0005539482 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 28 23:34:49 np0005539482 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 28 23:34:49 np0005539482 kernel: iommu: Default domain type: Translated
Nov 28 23:34:49 np0005539482 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 28 23:34:49 np0005539482 kernel: SCSI subsystem initialized
Nov 28 23:34:49 np0005539482 kernel: ACPI: bus type USB registered
Nov 28 23:34:49 np0005539482 kernel: usbcore: registered new interface driver usbfs
Nov 28 23:34:49 np0005539482 kernel: usbcore: registered new interface driver hub
Nov 28 23:34:49 np0005539482 kernel: usbcore: registered new device driver usb
Nov 28 23:34:49 np0005539482 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 28 23:34:49 np0005539482 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 28 23:34:49 np0005539482 kernel: PTP clock support registered
Nov 28 23:34:49 np0005539482 kernel: EDAC MC: Ver: 3.0.0
Nov 28 23:34:49 np0005539482 kernel: NetLabel: Initializing
Nov 28 23:34:49 np0005539482 kernel: NetLabel:  domain hash size = 128
Nov 28 23:34:49 np0005539482 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 28 23:34:49 np0005539482 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 28 23:34:49 np0005539482 kernel: PCI: Using ACPI for IRQ routing
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 28 23:34:49 np0005539482 kernel: vgaarb: loaded
Nov 28 23:34:49 np0005539482 kernel: clocksource: Switched to clocksource kvm-clock
Nov 28 23:34:49 np0005539482 kernel: VFS: Disk quotas dquot_6.6.0
Nov 28 23:34:49 np0005539482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 28 23:34:49 np0005539482 kernel: pnp: PnP ACPI init
Nov 28 23:34:49 np0005539482 kernel: pnp: PnP ACPI: found 5 devices
Nov 28 23:34:49 np0005539482 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_INET protocol family
Nov 28 23:34:49 np0005539482 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 28 23:34:49 np0005539482 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_XDP protocol family
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 28 23:34:49 np0005539482 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 28 23:34:49 np0005539482 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 28 23:34:49 np0005539482 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 98104 usecs
Nov 28 23:34:49 np0005539482 kernel: PCI: CLS 0 bytes, default 64
Nov 28 23:34:49 np0005539482 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 28 23:34:49 np0005539482 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 28 23:34:49 np0005539482 kernel: ACPI: bus type thunderbolt registered
Nov 28 23:34:49 np0005539482 kernel: Trying to unpack rootfs image as initramfs...
Nov 28 23:34:49 np0005539482 kernel: Initialise system trusted keyrings
Nov 28 23:34:49 np0005539482 kernel: Key type blacklist registered
Nov 28 23:34:49 np0005539482 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 28 23:34:49 np0005539482 kernel: zbud: loaded
Nov 28 23:34:49 np0005539482 kernel: integrity: Platform Keyring initialized
Nov 28 23:34:49 np0005539482 kernel: integrity: Machine keyring initialized
Nov 28 23:34:49 np0005539482 kernel: Freeing initrd memory: 85868K
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_ALG protocol family
Nov 28 23:34:49 np0005539482 kernel: xor: automatically using best checksumming function   avx       
Nov 28 23:34:49 np0005539482 kernel: Key type asymmetric registered
Nov 28 23:34:49 np0005539482 kernel: Asymmetric key parser 'x509' registered
Nov 28 23:34:49 np0005539482 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 28 23:34:49 np0005539482 kernel: io scheduler mq-deadline registered
Nov 28 23:34:49 np0005539482 kernel: io scheduler kyber registered
Nov 28 23:34:49 np0005539482 kernel: io scheduler bfq registered
Nov 28 23:34:49 np0005539482 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 28 23:34:49 np0005539482 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 28 23:34:49 np0005539482 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 28 23:34:49 np0005539482 kernel: ACPI: button: Power Button [PWRF]
Nov 28 23:34:49 np0005539482 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 28 23:34:49 np0005539482 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 28 23:34:49 np0005539482 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 28 23:34:49 np0005539482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 28 23:34:49 np0005539482 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 28 23:34:49 np0005539482 kernel: Non-volatile memory driver v1.3
Nov 28 23:34:49 np0005539482 kernel: rdac: device handler registered
Nov 28 23:34:49 np0005539482 kernel: hp_sw: device handler registered
Nov 28 23:34:49 np0005539482 kernel: emc: device handler registered
Nov 28 23:34:49 np0005539482 kernel: alua: device handler registered
Nov 28 23:34:49 np0005539482 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 28 23:34:49 np0005539482 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 28 23:34:49 np0005539482 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 28 23:34:49 np0005539482 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 28 23:34:49 np0005539482 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 28 23:34:49 np0005539482 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 28 23:34:49 np0005539482 kernel: usb usb1: Product: UHCI Host Controller
Nov 28 23:34:49 np0005539482 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 28 23:34:49 np0005539482 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 28 23:34:49 np0005539482 kernel: hub 1-0:1.0: USB hub found
Nov 28 23:34:49 np0005539482 kernel: hub 1-0:1.0: 2 ports detected
Nov 28 23:34:49 np0005539482 kernel: usbcore: registered new interface driver usbserial_generic
Nov 28 23:34:49 np0005539482 kernel: usbserial: USB Serial support registered for generic
Nov 28 23:34:49 np0005539482 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 28 23:34:49 np0005539482 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 28 23:34:49 np0005539482 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 28 23:34:49 np0005539482 kernel: mousedev: PS/2 mouse device common for all mice
Nov 28 23:34:49 np0005539482 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 28 23:34:49 np0005539482 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 28 23:34:49 np0005539482 kernel: rtc_cmos 00:04: registered as rtc0
Nov 28 23:34:49 np0005539482 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T04:34:48 UTC (1764390888)
Nov 28 23:34:49 np0005539482 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 28 23:34:49 np0005539482 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 28 23:34:49 np0005539482 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 28 23:34:49 np0005539482 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 28 23:34:49 np0005539482 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 28 23:34:49 np0005539482 kernel: usbcore: registered new interface driver usbhid
Nov 28 23:34:49 np0005539482 kernel: usbhid: USB HID core driver
Nov 28 23:34:49 np0005539482 kernel: drop_monitor: Initializing network drop monitor service
Nov 28 23:34:49 np0005539482 kernel: Initializing XFRM netlink socket
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_INET6 protocol family
Nov 28 23:34:49 np0005539482 kernel: Segment Routing with IPv6
Nov 28 23:34:49 np0005539482 kernel: NET: Registered PF_PACKET protocol family
Nov 28 23:34:49 np0005539482 kernel: mpls_gso: MPLS GSO support
Nov 28 23:34:49 np0005539482 kernel: IPI shorthand broadcast: enabled
Nov 28 23:34:49 np0005539482 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 28 23:34:49 np0005539482 kernel: AES CTR mode by8 optimization enabled
Nov 28 23:34:49 np0005539482 kernel: sched_clock: Marking stable (1189010823, 149767947)->(1457179421, -118400651)
Nov 28 23:34:49 np0005539482 kernel: registered taskstats version 1
Nov 28 23:34:49 np0005539482 kernel: Loading compiled-in X.509 certificates
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 28 23:34:49 np0005539482 kernel: Demotion targets for Node 0: null
Nov 28 23:34:49 np0005539482 kernel: page_owner is disabled
Nov 28 23:34:49 np0005539482 kernel: Key type .fscrypt registered
Nov 28 23:34:49 np0005539482 kernel: Key type fscrypt-provisioning registered
Nov 28 23:34:49 np0005539482 kernel: Key type big_key registered
Nov 28 23:34:49 np0005539482 kernel: Key type encrypted registered
Nov 28 23:34:49 np0005539482 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 28 23:34:49 np0005539482 kernel: Loading compiled-in module X.509 certificates
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 28 23:34:49 np0005539482 kernel: ima: Allocated hash algorithm: sha256
Nov 28 23:34:49 np0005539482 kernel: ima: No architecture policies found
Nov 28 23:34:49 np0005539482 kernel: evm: Initialising EVM extended attributes:
Nov 28 23:34:49 np0005539482 kernel: evm: security.selinux
Nov 28 23:34:49 np0005539482 kernel: evm: security.SMACK64 (disabled)
Nov 28 23:34:49 np0005539482 kernel: evm: security.SMACK64EXEC (disabled)
Nov 28 23:34:49 np0005539482 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 28 23:34:49 np0005539482 kernel: evm: security.SMACK64MMAP (disabled)
Nov 28 23:34:49 np0005539482 kernel: evm: security.apparmor (disabled)
Nov 28 23:34:49 np0005539482 kernel: evm: security.ima
Nov 28 23:34:49 np0005539482 kernel: evm: security.capability
Nov 28 23:34:49 np0005539482 kernel: evm: HMAC attrs: 0x1
Nov 28 23:34:49 np0005539482 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 28 23:34:49 np0005539482 kernel: Running certificate verification RSA selftest
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 28 23:34:49 np0005539482 kernel: Running certificate verification ECDSA selftest
Nov 28 23:34:49 np0005539482 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 28 23:34:49 np0005539482 kernel: clk: Disabling unused clocks
Nov 28 23:34:49 np0005539482 kernel: Freeing unused decrypted memory: 2028K
Nov 28 23:34:49 np0005539482 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 28 23:34:49 np0005539482 kernel: Write protecting the kernel read-only data: 30720k
Nov 28 23:34:49 np0005539482 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 28 23:34:49 np0005539482 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 28 23:34:49 np0005539482 kernel: Run /init as init process
Nov 28 23:34:49 np0005539482 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 28 23:34:49 np0005539482 systemd: Detected virtualization kvm.
Nov 28 23:34:49 np0005539482 systemd: Detected architecture x86-64.
Nov 28 23:34:49 np0005539482 systemd: Running in initrd.
Nov 28 23:34:49 np0005539482 systemd: No hostname configured, using default hostname.
Nov 28 23:34:49 np0005539482 systemd: Hostname set to <localhost>.
Nov 28 23:34:49 np0005539482 systemd: Initializing machine ID from VM UUID.
Nov 28 23:34:49 np0005539482 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 28 23:34:49 np0005539482 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 28 23:34:49 np0005539482 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 28 23:34:49 np0005539482 kernel: usb 1-1: Manufacturer: QEMU
Nov 28 23:34:49 np0005539482 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 28 23:34:49 np0005539482 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 28 23:34:49 np0005539482 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 28 23:34:49 np0005539482 systemd: Queued start job for default target Initrd Default Target.
Nov 28 23:34:49 np0005539482 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 28 23:34:49 np0005539482 systemd: Reached target Local Encrypted Volumes.
Nov 28 23:34:49 np0005539482 systemd: Reached target Initrd /usr File System.
Nov 28 23:34:49 np0005539482 systemd: Reached target Local File Systems.
Nov 28 23:34:49 np0005539482 systemd: Reached target Path Units.
Nov 28 23:34:49 np0005539482 systemd: Reached target Slice Units.
Nov 28 23:34:49 np0005539482 systemd: Reached target Swaps.
Nov 28 23:34:49 np0005539482 systemd: Reached target Timer Units.
Nov 28 23:34:49 np0005539482 systemd: Listening on D-Bus System Message Bus Socket.
Nov 28 23:34:49 np0005539482 systemd: Listening on Journal Socket (/dev/log).
Nov 28 23:34:49 np0005539482 systemd: Listening on Journal Socket.
Nov 28 23:34:49 np0005539482 systemd: Listening on udev Control Socket.
Nov 28 23:34:49 np0005539482 systemd: Listening on udev Kernel Socket.
Nov 28 23:34:49 np0005539482 systemd: Reached target Socket Units.
Nov 28 23:34:49 np0005539482 systemd: Starting Create List of Static Device Nodes...
Nov 28 23:34:49 np0005539482 systemd: Starting Journal Service...
Nov 28 23:34:49 np0005539482 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 28 23:34:49 np0005539482 systemd: Starting Apply Kernel Variables...
Nov 28 23:34:49 np0005539482 systemd: Starting Create System Users...
Nov 28 23:34:49 np0005539482 systemd: Starting Setup Virtual Console...
Nov 28 23:34:49 np0005539482 systemd: Finished Create List of Static Device Nodes.
Nov 28 23:34:49 np0005539482 systemd: Finished Apply Kernel Variables.
Nov 28 23:34:49 np0005539482 systemd: Finished Create System Users.
Nov 28 23:34:49 np0005539482 systemd-journald[306]: Journal started
Nov 28 23:34:49 np0005539482 systemd-journald[306]: Runtime Journal (/run/log/journal/60584de4e08041489fd937c7db79f006) is 8.0M, max 153.6M, 145.6M free.
Nov 28 23:34:49 np0005539482 systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 28 23:34:49 np0005539482 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 28 23:34:49 np0005539482 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 28 23:34:49 np0005539482 systemd: Started Journal Service.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 28 23:34:49 np0005539482 systemd[1]: Starting Create Volatile Files and Directories...
Nov 28 23:34:49 np0005539482 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 28 23:34:49 np0005539482 systemd[1]: Finished Setup Virtual Console.
Nov 28 23:34:49 np0005539482 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting dracut cmdline hook...
Nov 28 23:34:49 np0005539482 systemd[1]: Finished Create Volatile Files and Directories.
Nov 28 23:34:49 np0005539482 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 28 23:34:49 np0005539482 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 28 23:34:49 np0005539482 systemd[1]: Finished dracut cmdline hook.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting dracut pre-udev hook...
Nov 28 23:34:49 np0005539482 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 28 23:34:49 np0005539482 kernel: device-mapper: uevent: version 1.0.3
Nov 28 23:34:49 np0005539482 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 28 23:34:49 np0005539482 kernel: RPC: Registered named UNIX socket transport module.
Nov 28 23:34:49 np0005539482 kernel: RPC: Registered udp transport module.
Nov 28 23:34:49 np0005539482 kernel: RPC: Registered tcp transport module.
Nov 28 23:34:49 np0005539482 kernel: RPC: Registered tcp-with-tls transport module.
Nov 28 23:34:49 np0005539482 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 28 23:34:49 np0005539482 rpc.statd[443]: Version 2.5.4 starting
Nov 28 23:34:49 np0005539482 rpc.statd[443]: Initializing NSM state
Nov 28 23:34:49 np0005539482 rpc.idmapd[448]: Setting log level to 0
Nov 28 23:34:49 np0005539482 systemd[1]: Finished dracut pre-udev hook.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 28 23:34:49 np0005539482 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 28 23:34:49 np0005539482 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting dracut pre-trigger hook...
Nov 28 23:34:49 np0005539482 systemd[1]: Finished dracut pre-trigger hook.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting Coldplug All udev Devices...
Nov 28 23:34:49 np0005539482 systemd[1]: Created slice Slice /system/modprobe.
Nov 28 23:34:49 np0005539482 systemd[1]: Starting Load Kernel Module configfs...
Nov 28 23:34:49 np0005539482 systemd[1]: Finished Coldplug All udev Devices.
Nov 28 23:34:49 np0005539482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 23:34:49 np0005539482 systemd[1]: Finished Load Kernel Module configfs.
Nov 28 23:34:49 np0005539482 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 28 23:34:49 np0005539482 systemd[1]: Reached target Network.
Nov 28 23:34:49 np0005539482 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 28 23:34:49 np0005539482 systemd[1]: Starting dracut initqueue hook...
Nov 28 23:34:49 np0005539482 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 28 23:34:49 np0005539482 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 28 23:34:49 np0005539482 kernel: vda: vda1
Nov 28 23:34:49 np0005539482 kernel: scsi host0: ata_piix
Nov 28 23:34:49 np0005539482 kernel: scsi host1: ata_piix
Nov 28 23:34:49 np0005539482 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 28 23:34:49 np0005539482 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 28 23:34:49 np0005539482 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 28 23:34:49 np0005539482 systemd[1]: Reached target Initrd Root Device.
Nov 28 23:34:50 np0005539482 systemd[1]: Mounting Kernel Configuration File System...
Nov 28 23:34:50 np0005539482 kernel: ata1: found unknown device (class 0)
Nov 28 23:34:50 np0005539482 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 28 23:34:50 np0005539482 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 28 23:34:50 np0005539482 systemd-udevd[493]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 23:34:50 np0005539482 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 28 23:34:50 np0005539482 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 28 23:34:50 np0005539482 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 28 23:34:50 np0005539482 systemd[1]: Mounted Kernel Configuration File System.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target System Initialization.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Basic System.
Nov 28 23:34:50 np0005539482 systemd[1]: Finished dracut initqueue hook.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Remote File Systems.
Nov 28 23:34:50 np0005539482 systemd[1]: Starting dracut pre-mount hook...
Nov 28 23:34:50 np0005539482 systemd[1]: Finished dracut pre-mount hook.
Nov 28 23:34:50 np0005539482 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 28 23:34:50 np0005539482 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 28 23:34:50 np0005539482 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 28 23:34:50 np0005539482 systemd[1]: Mounting /sysroot...
Nov 28 23:34:50 np0005539482 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 28 23:34:50 np0005539482 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 28 23:34:50 np0005539482 kernel: XFS (vda1): Ending clean mount
Nov 28 23:34:50 np0005539482 systemd[1]: Mounted /sysroot.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Initrd Root File System.
Nov 28 23:34:50 np0005539482 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 28 23:34:50 np0005539482 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 28 23:34:50 np0005539482 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Initrd File Systems.
Nov 28 23:34:50 np0005539482 systemd[1]: Reached target Initrd Default Target.
Nov 28 23:34:50 np0005539482 systemd[1]: Starting dracut mount hook...
Nov 28 23:34:50 np0005539482 systemd[1]: Finished dracut mount hook.
Nov 28 23:34:50 np0005539482 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 28 23:34:51 np0005539482 rpc.idmapd[448]: exiting on signal 15
Nov 28 23:34:51 np0005539482 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Network.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Timer Units.
Nov 28 23:34:51 np0005539482 systemd[1]: dbus.socket: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Initrd Default Target.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Basic System.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Initrd Root Device.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Initrd /usr File System.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Path Units.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Remote File Systems.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Slice Units.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Socket Units.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target System Initialization.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Local File Systems.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Swaps.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut mount hook.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut pre-mount hook.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut initqueue hook.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Apply Kernel Variables.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Coldplug All udev Devices.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut pre-trigger hook.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Setup Virtual Console.
Nov 28 23:34:51 np0005539482 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Closed udev Control Socket.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Closed udev Kernel Socket.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut pre-udev hook.
Nov 28 23:34:51 np0005539482 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped dracut cmdline hook.
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Cleanup udev Database...
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 28 23:34:51 np0005539482 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Stopped Create System Users.
Nov 28 23:34:51 np0005539482 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Cleanup udev Database.
Nov 28 23:34:51 np0005539482 systemd[1]: Reached target Switch Root.
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Switch Root...
Nov 28 23:34:51 np0005539482 systemd[1]: Switching root.
Nov 28 23:34:51 np0005539482 systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Nov 28 23:34:51 np0005539482 systemd-journald[306]: Journal stopped
Nov 28 23:34:51 np0005539482 kernel: audit: type=1404 audit(1764390891.243:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 23:34:51 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 23:34:51 np0005539482 kernel: audit: type=1403 audit(1764390891.379:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 28 23:34:51 np0005539482 systemd: Successfully loaded SELinux policy in 139.192ms.
Nov 28 23:34:51 np0005539482 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.596ms.
Nov 28 23:34:51 np0005539482 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 28 23:34:51 np0005539482 systemd: Detected virtualization kvm.
Nov 28 23:34:51 np0005539482 systemd: Detected architecture x86-64.
Nov 28 23:34:51 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 23:34:51 np0005539482 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd: Stopped Switch Root.
Nov 28 23:34:51 np0005539482 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 28 23:34:51 np0005539482 systemd: Created slice Slice /system/getty.
Nov 28 23:34:51 np0005539482 systemd: Created slice Slice /system/serial-getty.
Nov 28 23:34:51 np0005539482 systemd: Created slice Slice /system/sshd-keygen.
Nov 28 23:34:51 np0005539482 systemd: Created slice User and Session Slice.
Nov 28 23:34:51 np0005539482 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 28 23:34:51 np0005539482 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 28 23:34:51 np0005539482 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 28 23:34:51 np0005539482 systemd: Reached target Local Encrypted Volumes.
Nov 28 23:34:51 np0005539482 systemd: Stopped target Switch Root.
Nov 28 23:34:51 np0005539482 systemd: Stopped target Initrd File Systems.
Nov 28 23:34:51 np0005539482 systemd: Stopped target Initrd Root File System.
Nov 28 23:34:51 np0005539482 systemd: Reached target Local Integrity Protected Volumes.
Nov 28 23:34:51 np0005539482 systemd: Reached target Path Units.
Nov 28 23:34:51 np0005539482 systemd: Reached target rpc_pipefs.target.
Nov 28 23:34:51 np0005539482 systemd: Reached target Slice Units.
Nov 28 23:34:51 np0005539482 systemd: Reached target Swaps.
Nov 28 23:34:51 np0005539482 systemd: Reached target Local Verity Protected Volumes.
Nov 28 23:34:51 np0005539482 systemd: Listening on RPCbind Server Activation Socket.
Nov 28 23:34:51 np0005539482 systemd: Reached target RPC Port Mapper.
Nov 28 23:34:51 np0005539482 systemd: Listening on Process Core Dump Socket.
Nov 28 23:34:51 np0005539482 systemd: Listening on initctl Compatibility Named Pipe.
Nov 28 23:34:51 np0005539482 systemd: Listening on udev Control Socket.
Nov 28 23:34:51 np0005539482 systemd: Listening on udev Kernel Socket.
Nov 28 23:34:51 np0005539482 systemd: Mounting Huge Pages File System...
Nov 28 23:34:51 np0005539482 systemd: Mounting POSIX Message Queue File System...
Nov 28 23:34:51 np0005539482 systemd: Mounting Kernel Debug File System...
Nov 28 23:34:51 np0005539482 systemd: Mounting Kernel Trace File System...
Nov 28 23:34:51 np0005539482 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 28 23:34:51 np0005539482 systemd: Starting Create List of Static Device Nodes...
Nov 28 23:34:51 np0005539482 systemd: Starting Load Kernel Module configfs...
Nov 28 23:34:51 np0005539482 systemd: Starting Load Kernel Module drm...
Nov 28 23:34:51 np0005539482 systemd: Starting Load Kernel Module efi_pstore...
Nov 28 23:34:51 np0005539482 systemd: Starting Load Kernel Module fuse...
Nov 28 23:34:51 np0005539482 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 28 23:34:51 np0005539482 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd: Stopped File System Check on Root Device.
Nov 28 23:34:51 np0005539482 systemd: Stopped Journal Service.
Nov 28 23:34:51 np0005539482 systemd: Starting Journal Service...
Nov 28 23:34:51 np0005539482 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 28 23:34:51 np0005539482 systemd: Starting Generate network units from Kernel command line...
Nov 28 23:34:51 np0005539482 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 23:34:51 np0005539482 systemd: Starting Remount Root and Kernel File Systems...
Nov 28 23:34:51 np0005539482 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 28 23:34:51 np0005539482 kernel: fuse: init (API version 7.37)
Nov 28 23:34:51 np0005539482 systemd: Starting Apply Kernel Variables...
Nov 28 23:34:51 np0005539482 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 28 23:34:51 np0005539482 systemd: Starting Coldplug All udev Devices...
Nov 28 23:34:51 np0005539482 systemd: Mounted Huge Pages File System.
Nov 28 23:34:51 np0005539482 systemd: Mounted POSIX Message Queue File System.
Nov 28 23:34:51 np0005539482 systemd: Mounted Kernel Debug File System.
Nov 28 23:34:51 np0005539482 systemd-journald[677]: Journal started
Nov 28 23:34:51 np0005539482 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 28 23:34:51 np0005539482 systemd[1]: Queued start job for default target Multi-User System.
Nov 28 23:34:51 np0005539482 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd: Started Journal Service.
Nov 28 23:34:51 np0005539482 systemd[1]: Mounted Kernel Trace File System.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Create List of Static Device Nodes.
Nov 28 23:34:51 np0005539482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Load Kernel Module configfs.
Nov 28 23:34:51 np0005539482 kernel: ACPI: bus type drm_connector registered
Nov 28 23:34:51 np0005539482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 28 23:34:51 np0005539482 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Load Kernel Module drm.
Nov 28 23:34:51 np0005539482 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Load Kernel Module fuse.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Generate network units from Kernel command line.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Apply Kernel Variables.
Nov 28 23:34:51 np0005539482 systemd[1]: Mounting FUSE Control File System...
Nov 28 23:34:51 np0005539482 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Rebuild Hardware Database...
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 28 23:34:51 np0005539482 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Load/Save OS Random Seed...
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Create System Users...
Nov 28 23:34:51 np0005539482 systemd[1]: Mounted FUSE Control File System.
Nov 28 23:34:51 np0005539482 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 28 23:34:51 np0005539482 systemd-journald[677]: Received client request to flush runtime journal.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Load/Save OS Random Seed.
Nov 28 23:34:51 np0005539482 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Create System Users.
Nov 28 23:34:51 np0005539482 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Coldplug All udev Devices.
Nov 28 23:34:51 np0005539482 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 28 23:34:51 np0005539482 systemd[1]: Reached target Preparation for Local File Systems.
Nov 28 23:34:51 np0005539482 systemd[1]: Reached target Local File Systems.
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 28 23:34:52 np0005539482 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 28 23:34:52 np0005539482 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 28 23:34:52 np0005539482 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Automatic Boot Loader Update...
Nov 28 23:34:52 np0005539482 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Create Volatile Files and Directories...
Nov 28 23:34:52 np0005539482 bootctl[694]: Couldn't find EFI system partition, skipping.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Automatic Boot Loader Update.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Create Volatile Files and Directories.
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Security Auditing Service...
Nov 28 23:34:52 np0005539482 systemd[1]: Starting RPC Bind...
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Rebuild Journal Catalog...
Nov 28 23:34:52 np0005539482 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 28 23:34:52 np0005539482 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Rebuild Journal Catalog.
Nov 28 23:34:52 np0005539482 augenrules[705]: /sbin/augenrules: No change
Nov 28 23:34:52 np0005539482 augenrules[720]: No rules
Nov 28 23:34:52 np0005539482 augenrules[720]: enabled 1
Nov 28 23:34:52 np0005539482 augenrules[720]: failure 1
Nov 28 23:34:52 np0005539482 augenrules[720]: pid 700
Nov 28 23:34:52 np0005539482 augenrules[720]: rate_limit 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_limit 8192
Nov 28 23:34:52 np0005539482 augenrules[720]: lost 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog 3
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_wait_time 60000
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_wait_time_actual 0
Nov 28 23:34:52 np0005539482 augenrules[720]: enabled 1
Nov 28 23:34:52 np0005539482 augenrules[720]: failure 1
Nov 28 23:34:52 np0005539482 augenrules[720]: pid 700
Nov 28 23:34:52 np0005539482 augenrules[720]: rate_limit 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_limit 8192
Nov 28 23:34:52 np0005539482 augenrules[720]: lost 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_wait_time 60000
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_wait_time_actual 0
Nov 28 23:34:52 np0005539482 augenrules[720]: enabled 1
Nov 28 23:34:52 np0005539482 augenrules[720]: failure 1
Nov 28 23:34:52 np0005539482 augenrules[720]: pid 700
Nov 28 23:34:52 np0005539482 augenrules[720]: rate_limit 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_limit 8192
Nov 28 23:34:52 np0005539482 augenrules[720]: lost 0
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog 1
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_wait_time 60000
Nov 28 23:34:52 np0005539482 augenrules[720]: backlog_wait_time_actual 0
Nov 28 23:34:52 np0005539482 systemd[1]: Started Security Auditing Service.
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 28 23:34:52 np0005539482 systemd[1]: Started RPC Bind.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Rebuild Hardware Database.
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Update is Completed...
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Update is Completed.
Nov 28 23:34:52 np0005539482 systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Nov 28 23:34:52 np0005539482 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 28 23:34:52 np0005539482 systemd[1]: Reached target System Initialization.
Nov 28 23:34:52 np0005539482 systemd[1]: Started dnf makecache --timer.
Nov 28 23:34:52 np0005539482 systemd[1]: Started Daily rotation of log files.
Nov 28 23:34:52 np0005539482 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 28 23:34:52 np0005539482 systemd[1]: Reached target Timer Units.
Nov 28 23:34:52 np0005539482 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 28 23:34:52 np0005539482 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 28 23:34:52 np0005539482 systemd[1]: Reached target Socket Units.
Nov 28 23:34:52 np0005539482 systemd[1]: Starting D-Bus System Message Bus...
Nov 28 23:34:52 np0005539482 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Load Kernel Module configfs...
Nov 28 23:34:52 np0005539482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Load Kernel Module configfs.
Nov 28 23:34:52 np0005539482 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 28 23:34:52 np0005539482 systemd[1]: Started D-Bus System Message Bus.
Nov 28 23:34:52 np0005539482 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 23:34:52 np0005539482 systemd[1]: Reached target Basic System.
Nov 28 23:34:52 np0005539482 dbus-broker-lau[743]: Ready
Nov 28 23:34:52 np0005539482 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 28 23:34:52 np0005539482 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 28 23:34:52 np0005539482 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 28 23:34:52 np0005539482 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 28 23:34:52 np0005539482 systemd[1]: Starting NTP client/server...
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 28 23:34:52 np0005539482 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 28 23:34:52 np0005539482 systemd[1]: Starting IPv4 firewall with iptables...
Nov 28 23:34:52 np0005539482 systemd[1]: Started irqbalance daemon.
Nov 28 23:34:52 np0005539482 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 28 23:34:52 np0005539482 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 28 23:34:52 np0005539482 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 28 23:34:52 np0005539482 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 23:34:52 np0005539482 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 23:34:52 np0005539482 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 23:34:52 np0005539482 systemd[1]: Reached target sshd-keygen.target.
Nov 28 23:34:52 np0005539482 kernel: Console: switching to colour dummy device 80x25
Nov 28 23:34:52 np0005539482 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 28 23:34:52 np0005539482 kernel: [drm] features: -context_init
Nov 28 23:34:52 np0005539482 chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 28 23:34:52 np0005539482 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 28 23:34:52 np0005539482 systemd[1]: Reached target User and Group Name Lookups.
Nov 28 23:34:52 np0005539482 chronyd[785]: Loaded 0 symmetric keys
Nov 28 23:34:52 np0005539482 chronyd[785]: Using right/UTC timezone to obtain leap second data
Nov 28 23:34:52 np0005539482 chronyd[785]: Loaded seccomp filter (level 2)
Nov 28 23:34:52 np0005539482 kernel: [drm] number of scanouts: 1
Nov 28 23:34:52 np0005539482 kernel: [drm] number of cap sets: 0
Nov 28 23:34:52 np0005539482 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 28 23:34:52 np0005539482 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 28 23:34:52 np0005539482 kernel: Console: switching to colour frame buffer device 128x48
Nov 28 23:34:52 np0005539482 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 28 23:34:52 np0005539482 systemd[1]: Starting User Login Management...
Nov 28 23:34:52 np0005539482 systemd[1]: Started NTP client/server.
Nov 28 23:34:52 np0005539482 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 28 23:34:52 np0005539482 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 28 23:34:52 np0005539482 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 28 23:34:52 np0005539482 systemd-logind[793]: New seat seat0.
Nov 28 23:34:52 np0005539482 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 28 23:34:52 np0005539482 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 28 23:34:52 np0005539482 systemd[1]: Started User Login Management.
Nov 28 23:34:52 np0005539482 kernel: kvm_amd: TSC scaling supported
Nov 28 23:34:52 np0005539482 kernel: kvm_amd: Nested Virtualization enabled
Nov 28 23:34:52 np0005539482 kernel: kvm_amd: Nested Paging enabled
Nov 28 23:34:52 np0005539482 kernel: kvm_amd: LBR virtualization supported
Nov 28 23:34:52 np0005539482 iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Nov 28 23:34:53 np0005539482 systemd[1]: Finished IPv4 firewall with iptables.
Nov 28 23:34:53 np0005539482 cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 04:34:53 +0000. Up 5.77 seconds.
Nov 28 23:34:53 np0005539482 systemd[1]: run-cloud\x2dinit-tmp-tmp5u03kq9d.mount: Deactivated successfully.
Nov 28 23:34:53 np0005539482 systemd[1]: Starting Hostname Service...
Nov 28 23:34:53 np0005539482 systemd[1]: Started Hostname Service.
Nov 28 23:34:53 np0005539482 systemd-hostnamed[852]: Hostname set to <np0005539482.novalocal> (static)
Nov 28 23:34:53 np0005539482 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 28 23:34:53 np0005539482 systemd[1]: Reached target Preparation for Network.
Nov 28 23:34:53 np0005539482 systemd[1]: Starting Network Manager...
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6486] NetworkManager (version 1.54.1-1.el9) is starting... (boot:919d61e4-148b-4df4-a773-feb4933c1c42)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6491] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6567] manager[0x5625a18a4080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6606] hostname: hostname: using hostnamed
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6606] hostname: static hostname changed from (none) to "np0005539482.novalocal"
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6612] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6744] manager[0x5625a18a4080]: rfkill: Wi-Fi hardware radio set enabled
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6747] manager[0x5625a18a4080]: rfkill: WWAN hardware radio set enabled
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6790] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6790] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 28 23:34:53 np0005539482 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6791] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6794] manager: Networking is enabled by state file
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6796] settings: Loaded settings plugin: keyfile (internal)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6808] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6833] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6846] dhcp: init: Using DHCP client 'internal'
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6849] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6861] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6868] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6876] device (lo): Activation: starting connection 'lo' (aeac58a6-e034-4337-948c-d58870c36302)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6885] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6889] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6916] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6921] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6923] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6925] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6927] device (eth0): carrier: link connected
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6931] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6937] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6942] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6945] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6946] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6949] manager: NetworkManager state is now CONNECTING
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6951] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6957] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.6960] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 23:34:53 np0005539482 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 23:34:53 np0005539482 systemd[1]: Started Network Manager.
Nov 28 23:34:53 np0005539482 systemd[1]: Reached target Network.
Nov 28 23:34:53 np0005539482 systemd[1]: Starting Network Manager Wait Online...
Nov 28 23:34:53 np0005539482 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 28 23:34:53 np0005539482 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7236] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7239] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7248] device (lo): Activation: successful, device activated.
Nov 28 23:34:53 np0005539482 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 28 23:34:53 np0005539482 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 28 23:34:53 np0005539482 systemd[1]: Reached target NFS client services.
Nov 28 23:34:53 np0005539482 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 28 23:34:53 np0005539482 systemd[1]: Reached target Remote File Systems.
Nov 28 23:34:53 np0005539482 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7616] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7629] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7652] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7670] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7671] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7674] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7677] device (eth0): Activation: successful, device activated.
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7683] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 28 23:34:53 np0005539482 NetworkManager[856]: <info>  [1764390893.7686] manager: startup complete
Nov 28 23:34:53 np0005539482 systemd[1]: Finished Network Manager Wait Online.
Nov 28 23:34:53 np0005539482 systemd[1]: Starting Cloud-init: Network Stage...
Nov 28 23:34:54 np0005539482 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 04:34:54 +0000. Up 6.73 seconds.
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.17         | 255.255.255.0 | global | fa:16:3e:1f:f5:ec |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe1f:f5ec/64 |       .       |  link  | fa:16:3e:1f:f5:ec |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 28 23:34:54 np0005539482 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 23:34:55 np0005539482 cloud-init[920]: Generating public/private rsa key pair.
Nov 28 23:34:55 np0005539482 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 28 23:34:55 np0005539482 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 28 23:34:55 np0005539482 cloud-init[920]: The key fingerprint is:
Nov 28 23:34:55 np0005539482 cloud-init[920]: SHA256:ALoIgfLCj6dBFoGXeKG95UMEr9gW+lngdAWyDGR51Hw root@np0005539482.novalocal
Nov 28 23:34:55 np0005539482 cloud-init[920]: The key's randomart image is:
Nov 28 23:34:55 np0005539482 cloud-init[920]: +---[RSA 3072]----+
Nov 28 23:34:55 np0005539482 cloud-init[920]: |*=B=*..          |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |*O+B = E         |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |=+@ = o          |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |o@.@   .         |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |*.X +   S        |
Nov 28 23:34:55 np0005539482 cloud-init[920]: | = = .           |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |  *              |
Nov 28 23:34:55 np0005539482 cloud-init[920]: | .               |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |                 |
Nov 28 23:34:55 np0005539482 cloud-init[920]: +----[SHA256]-----+
Nov 28 23:34:55 np0005539482 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 28 23:34:55 np0005539482 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 28 23:34:55 np0005539482 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 28 23:34:55 np0005539482 cloud-init[920]: The key fingerprint is:
Nov 28 23:34:55 np0005539482 cloud-init[920]: SHA256:szS6/azr6zWLNGOc3qRPjtOudZU71M2G+GzB3LRQz4U root@np0005539482.novalocal
Nov 28 23:34:55 np0005539482 cloud-init[920]: The key's randomart image is:
Nov 28 23:34:55 np0005539482 cloud-init[920]: +---[ECDSA 256]---+
Nov 28 23:34:55 np0005539482 cloud-init[920]: |               o.|
Nov 28 23:34:55 np0005539482 cloud-init[920]: |              E.o|
Nov 28 23:34:55 np0005539482 cloud-init[920]: |             .  +|
Nov 28 23:34:55 np0005539482 cloud-init[920]: |             +.Bo|
Nov 28 23:34:55 np0005539482 cloud-init[920]: |        S   . O.*|
Nov 28 23:34:55 np0005539482 cloud-init[920]: |       + =   = + |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |      . O.* . *  |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |       *.#.+ . . |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |      .o&XO      |
Nov 28 23:34:55 np0005539482 cloud-init[920]: +----[SHA256]-----+
Nov 28 23:34:55 np0005539482 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 28 23:34:55 np0005539482 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 28 23:34:55 np0005539482 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 28 23:34:55 np0005539482 cloud-init[920]: The key fingerprint is:
Nov 28 23:34:55 np0005539482 cloud-init[920]: SHA256:GAJwerN00ozBIJJYeYgzslfG5EvwLlOkgCp+SxnR760 root@np0005539482.novalocal
Nov 28 23:34:55 np0005539482 cloud-init[920]: The key's randomart image is:
Nov 28 23:34:55 np0005539482 cloud-init[920]: +--[ED25519 256]--+
Nov 28 23:34:55 np0005539482 cloud-init[920]: |OB*=+            |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |@++&=.           |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |++B+@ o          |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |+o.X o +         |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |o.+ = o S        |
Nov 28 23:34:55 np0005539482 cloud-init[920]: | . *   . .       |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |  o .   .        |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |   .   E         |
Nov 28 23:34:55 np0005539482 cloud-init[920]: |                 |
Nov 28 23:34:55 np0005539482 cloud-init[920]: +----[SHA256]-----+
Nov 28 23:34:55 np0005539482 systemd[1]: Finished Cloud-init: Network Stage.
Nov 28 23:34:55 np0005539482 systemd[1]: Reached target Cloud-config availability.
Nov 28 23:34:55 np0005539482 systemd[1]: Reached target Network is Online.
Nov 28 23:34:55 np0005539482 systemd[1]: Starting Cloud-init: Config Stage...
Nov 28 23:34:55 np0005539482 systemd[1]: Starting Crash recovery kernel arming...
Nov 28 23:34:55 np0005539482 systemd[1]: Starting Notify NFS peers of a restart...
Nov 28 23:34:56 np0005539482 systemd[1]: Starting System Logging Service...
Nov 28 23:34:56 np0005539482 systemd[1]: Starting OpenSSH server daemon...
Nov 28 23:34:56 np0005539482 sm-notify[1002]: Version 2.5.4 starting
Nov 28 23:34:56 np0005539482 systemd[1]: Starting Permit User Sessions...
Nov 28 23:34:56 np0005539482 systemd[1]: Started Notify NFS peers of a restart.
Nov 28 23:34:56 np0005539482 systemd[1]: Started OpenSSH server daemon.
Nov 28 23:34:56 np0005539482 systemd[1]: Finished Permit User Sessions.
Nov 28 23:34:56 np0005539482 rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Nov 28 23:34:56 np0005539482 rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 28 23:34:56 np0005539482 systemd[1]: Started Command Scheduler.
Nov 28 23:34:56 np0005539482 systemd[1]: Started Getty on tty1.
Nov 28 23:34:56 np0005539482 systemd[1]: Started Serial Getty on ttyS0.
Nov 28 23:34:56 np0005539482 systemd[1]: Reached target Login Prompts.
Nov 28 23:34:56 np0005539482 systemd[1]: Started System Logging Service.
Nov 28 23:34:56 np0005539482 systemd[1]: Reached target Multi-User System.
Nov 28 23:34:56 np0005539482 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 28 23:34:56 np0005539482 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 28 23:34:56 np0005539482 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 28 23:34:56 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 23:34:56 np0005539482 kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Nov 28 23:34:56 np0005539482 kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 28 23:34:56 np0005539482 cloud-init[1139]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 04:34:56 +0000. Up 8.91 seconds.
Nov 28 23:34:56 np0005539482 systemd[1]: Finished Cloud-init: Config Stage.
Nov 28 23:34:56 np0005539482 systemd[1]: Starting Cloud-init: Final Stage...
Nov 28 23:34:56 np0005539482 dracut[1284]: dracut-057-102.git20250818.el9
Nov 28 23:34:56 np0005539482 cloud-init[1300]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 04:34:56 +0000. Up 9.30 seconds.
Nov 28 23:34:56 np0005539482 cloud-init[1302]: #############################################################
Nov 28 23:34:56 np0005539482 cloud-init[1303]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 28 23:34:56 np0005539482 cloud-init[1305]: 256 SHA256:szS6/azr6zWLNGOc3qRPjtOudZU71M2G+GzB3LRQz4U root@np0005539482.novalocal (ECDSA)
Nov 28 23:34:56 np0005539482 cloud-init[1307]: 256 SHA256:GAJwerN00ozBIJJYeYgzslfG5EvwLlOkgCp+SxnR760 root@np0005539482.novalocal (ED25519)
Nov 28 23:34:56 np0005539482 cloud-init[1311]: 3072 SHA256:ALoIgfLCj6dBFoGXeKG95UMEr9gW+lngdAWyDGR51Hw root@np0005539482.novalocal (RSA)
Nov 28 23:34:56 np0005539482 cloud-init[1312]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 28 23:34:56 np0005539482 cloud-init[1313]: #############################################################
Nov 28 23:34:56 np0005539482 dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 28 23:34:56 np0005539482 cloud-init[1300]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 04:34:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.51 seconds
Nov 28 23:34:56 np0005539482 systemd[1]: Finished Cloud-init: Final Stage.
Nov 28 23:34:56 np0005539482 systemd[1]: Reached target Cloud-init target.
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: memstrack is not available
Nov 28 23:34:57 np0005539482 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 28 23:34:57 np0005539482 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 28 23:34:58 np0005539482 dracut[1286]: memstrack is not available
Nov 28 23:34:58 np0005539482 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 28 23:34:58 np0005539482 dracut[1286]: *** Including module: systemd ***
Nov 28 23:34:58 np0005539482 dracut[1286]: *** Including module: fips ***
Nov 28 23:34:58 np0005539482 chronyd[785]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Nov 28 23:34:58 np0005539482 chronyd[785]: System clock TAI offset set to 37 seconds
Nov 28 23:34:58 np0005539482 dracut[1286]: *** Including module: systemd-initrd ***
Nov 28 23:34:58 np0005539482 dracut[1286]: *** Including module: i18n ***
Nov 28 23:34:58 np0005539482 dracut[1286]: *** Including module: drm ***
Nov 28 23:34:59 np0005539482 dracut[1286]: *** Including module: prefixdevname ***
Nov 28 23:34:59 np0005539482 dracut[1286]: *** Including module: kernel-modules ***
Nov 28 23:34:59 np0005539482 kernel: block vda: the capability attribute has been deprecated.
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: kernel-modules-extra ***
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: qemu ***
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: fstab-sys ***
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: rootfs-block ***
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: terminfo ***
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: udev-rules ***
Nov 28 23:35:00 np0005539482 dracut[1286]: Skipping udev rule: 91-permissions.rules
Nov 28 23:35:00 np0005539482 dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: virtiofs ***
Nov 28 23:35:00 np0005539482 dracut[1286]: *** Including module: dracut-systemd ***
Nov 28 23:35:01 np0005539482 dracut[1286]: *** Including module: usrmount ***
Nov 28 23:35:01 np0005539482 dracut[1286]: *** Including module: base ***
Nov 28 23:35:01 np0005539482 dracut[1286]: *** Including module: fs-lib ***
Nov 28 23:35:01 np0005539482 dracut[1286]: *** Including module: kdumpbase ***
Nov 28 23:35:01 np0005539482 dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 28 23:35:01 np0005539482 dracut[1286]:  microcode_ctl module: mangling fw_dir
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel" is ignored
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 28 23:35:01 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 28 23:35:02 np0005539482 dracut[1286]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 28 23:35:02 np0005539482 dracut[1286]: *** Including module: openssl ***
Nov 28 23:35:02 np0005539482 dracut[1286]: *** Including module: shutdown ***
Nov 28 23:35:02 np0005539482 dracut[1286]: *** Including module: squash ***
Nov 28 23:35:02 np0005539482 dracut[1286]: *** Including modules done ***
Nov 28 23:35:02 np0005539482 dracut[1286]: *** Installing kernel module dependencies ***
Nov 28 23:35:03 np0005539482 dracut[1286]: *** Installing kernel module dependencies done ***
Nov 28 23:35:03 np0005539482 dracut[1286]: *** Resolving executable dependencies ***
Nov 28 23:35:03 np0005539482 irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 28 23:35:03 np0005539482 irqbalance[782]: IRQ 25 affinity is now unmanaged
Nov 28 23:35:03 np0005539482 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 28 23:35:03 np0005539482 irqbalance[782]: IRQ 31 affinity is now unmanaged
Nov 28 23:35:03 np0005539482 irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 28 23:35:03 np0005539482 irqbalance[782]: IRQ 28 affinity is now unmanaged
Nov 28 23:35:03 np0005539482 irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 28 23:35:03 np0005539482 irqbalance[782]: IRQ 32 affinity is now unmanaged
Nov 28 23:35:03 np0005539482 irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 28 23:35:03 np0005539482 irqbalance[782]: IRQ 30 affinity is now unmanaged
Nov 28 23:35:03 np0005539482 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 28 23:35:03 np0005539482 irqbalance[782]: IRQ 29 affinity is now unmanaged
Nov 28 23:35:03 np0005539482 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 23:35:04 np0005539482 dracut[1286]: *** Resolving executable dependencies done ***
Nov 28 23:35:04 np0005539482 dracut[1286]: *** Generating early-microcode cpio image ***
Nov 28 23:35:04 np0005539482 dracut[1286]: *** Store current command line parameters ***
Nov 28 23:35:04 np0005539482 dracut[1286]: Stored kernel commandline:
Nov 28 23:35:04 np0005539482 dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Nov 28 23:35:05 np0005539482 dracut[1286]: *** Install squash loader ***
Nov 28 23:35:05 np0005539482 dracut[1286]: *** Squashing the files inside the initramfs ***
Nov 28 23:35:07 np0005539482 dracut[1286]: *** Squashing the files inside the initramfs done ***
Nov 28 23:35:07 np0005539482 dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 28 23:35:07 np0005539482 dracut[1286]: *** Hardlinking files ***
Nov 28 23:35:07 np0005539482 dracut[1286]: *** Hardlinking files done ***
Nov 28 23:35:07 np0005539482 dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 28 23:35:07 np0005539482 kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Nov 28 23:35:07 np0005539482 kdumpctl[1012]: kdump: Starting kdump: [OK]
Nov 28 23:35:07 np0005539482 systemd[1]: Finished Crash recovery kernel arming.
Nov 28 23:35:07 np0005539482 systemd[1]: Startup finished in 1.506s (kernel) + 2.388s (initrd) + 16.697s (userspace) = 20.593s.
Nov 28 23:35:09 np0005539482 systemd[1]: Created slice User Slice of UID 1000.
Nov 28 23:35:09 np0005539482 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 28 23:35:09 np0005539482 systemd-logind[793]: New session 1 of user zuul.
Nov 28 23:35:09 np0005539482 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 28 23:35:09 np0005539482 systemd[1]: Starting User Manager for UID 1000...
Nov 28 23:35:10 np0005539482 systemd[4298]: Queued start job for default target Main User Target.
Nov 28 23:35:10 np0005539482 systemd[4298]: Created slice User Application Slice.
Nov 28 23:35:10 np0005539482 systemd[4298]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 28 23:35:10 np0005539482 systemd[4298]: Started Daily Cleanup of User's Temporary Directories.
Nov 28 23:35:10 np0005539482 systemd[4298]: Reached target Paths.
Nov 28 23:35:10 np0005539482 systemd[4298]: Reached target Timers.
Nov 28 23:35:10 np0005539482 systemd[4298]: Starting D-Bus User Message Bus Socket...
Nov 28 23:35:10 np0005539482 systemd[4298]: Starting Create User's Volatile Files and Directories...
Nov 28 23:35:10 np0005539482 systemd[4298]: Listening on D-Bus User Message Bus Socket.
Nov 28 23:35:10 np0005539482 systemd[4298]: Reached target Sockets.
Nov 28 23:35:10 np0005539482 systemd[4298]: Finished Create User's Volatile Files and Directories.
Nov 28 23:35:10 np0005539482 systemd[4298]: Reached target Basic System.
Nov 28 23:35:10 np0005539482 systemd[4298]: Reached target Main User Target.
Nov 28 23:35:10 np0005539482 systemd[4298]: Startup finished in 127ms.
Nov 28 23:35:10 np0005539482 systemd[1]: Started User Manager for UID 1000.
Nov 28 23:35:10 np0005539482 systemd[1]: Started Session 1 of User zuul.
Nov 28 23:35:10 np0005539482 python3[4380]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 23:35:13 np0005539482 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 23:35:19 np0005539482 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 23:35:19 np0005539482 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 28 23:35:21 np0005539482 python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDA8z7osgMfJ2V68AJKFgst/U0KXcc4VJrmzfWSwLCAOfFr1nGizEz1bHmhD5AP5T+NQF48QPTWJekwRWtTol+JQ7PPjXRnDneG8Q/rPEXMV2aBfw+3PdEYOOVD6H6t3kKlftuipUslUTns+Kva4yhOhX5u0owj67mG7GhRjdDLVIjB4JT88BhrqcF4m+AhhAAafKmQDudMb4CcmFRv0Ibb5iSOiJDB0jz7EoZa+1AeLksNBfhUsPuIc0uQ1aWze7thVlS8tvR1hTZKkPl72zSegthkyER8OF8wDl9qNZuzw5fYSCpr18IOUzTnbmv4OJ5N/fQwqgMNsgk+87085SfBPwAVUYlpmbK4CCxoqyMKRb2ShJEW2WVJd0ltBSOt1mizhuV9wd7pwv9DAxGfXKMuPyoMjiXGnKPqW1VnPiNHqvEacoOu/9XDRotLT29O4JNnTQpvIVEhEXytI5BzdLE9t3NXwle/rUM4j91OvyZXNLCuWOpC5JfBokuv5++nJNs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:22 np0005539482 python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:22 np0005539482 python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:23 np0005539482 python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764390922.424452-207-215796478777749/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c10d28bb2bab4e67bcc34b3958ef9bbe_id_rsa follow=False checksum=22cfbedb31c632b8064e31452f96b846a8515459 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:23 np0005539482 python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:23 np0005539482 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 23:35:24 np0005539482 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764390923.3509195-240-189515479250417/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c10d28bb2bab4e67bcc34b3958ef9bbe_id_rsa.pub follow=False checksum=c333e9f91b79a81adf6caca410967b32009e7daa backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:25 np0005539482 python3[4972]: ansible-ping Invoked with data=pong
Nov 28 23:35:26 np0005539482 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 23:35:27 np0005539482 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 28 23:35:28 np0005539482 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:29 np0005539482 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:29 np0005539482 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:29 np0005539482 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:30 np0005539482 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:30 np0005539482 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:31 np0005539482 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:32 np0005539482 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:32 np0005539482 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764390932.010001-21-198941903856383/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:33 np0005539482 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 28 23:35:33 np0005539482 irqbalance[782]: IRQ 26 affinity is now unmanaged
Nov 28 23:35:33 np0005539482 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:33 np0005539482 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:34 np0005539482 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:34 np0005539482 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:34 np0005539482 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:34 np0005539482 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:35 np0005539482 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:35 np0005539482 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:35 np0005539482 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:35 np0005539482 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:36 np0005539482 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:36 np0005539482 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:36 np0005539482 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:36 np0005539482 python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:37 np0005539482 python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:37 np0005539482 python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:37 np0005539482 python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:37 np0005539482 python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:38 np0005539482 python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:38 np0005539482 python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:38 np0005539482 python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:39 np0005539482 python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:39 np0005539482 python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:39 np0005539482 python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:39 np0005539482 python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:40 np0005539482 python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:35:43 np0005539482 python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 28 23:35:43 np0005539482 systemd[1]: Starting Time & Date Service...
Nov 28 23:35:43 np0005539482 systemd[1]: Started Time & Date Service.
Nov 28 23:35:43 np0005539482 systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Nov 28 23:35:43 np0005539482 python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:44 np0005539482 python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:44 np0005539482 python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764390943.9007714-153-83171242602396/source _original_basename=tmpbyockkw4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:45 np0005539482 python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:45 np0005539482 python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764390944.790436-183-149303766224578/source _original_basename=tmp4irdtlv_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:46 np0005539482 python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:46 np0005539482 python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764390945.8148494-231-52705442630964/source _original_basename=tmp89kk4x56 follow=False checksum=673d2f3d6c56c6a6f0fd71b2f865eaf754405451 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:46 np0005539482 python3[6635]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:35:47 np0005539482 python3[6661]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:35:47 np0005539482 python3[6741]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:35:47 np0005539482 python3[6814]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764390947.3701653-273-261965661699807/source _original_basename=tmpzahxeyn2 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:48 np0005539482 python3[6865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-5c3c-8300-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:35:49 np0005539482 python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-5c3c-8300-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 28 23:35:50 np0005539482 python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:35:53 np0005539482 irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 28 23:35:53 np0005539482 irqbalance[782]: IRQ 27 affinity is now unmanaged
Nov 28 23:36:09 np0005539482 python3[6950]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:36:13 np0005539482 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 28 23:36:42 np0005539482 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 28 23:36:42 np0005539482 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0430] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 28 23:36:43 np0005539482 systemd-udevd[6955]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0607] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0633] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0638] device (eth1): carrier: link connected
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0640] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0647] policy: auto-activating connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6)
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0651] device (eth1): Activation: starting connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6)
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0653] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0656] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0661] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 23:36:43 np0005539482 NetworkManager[856]: <info>  [1764391003.0666] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 23:36:43 np0005539482 python3[6982]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-7f1d-78cb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:36:53 np0005539482 python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:36:54 np0005539482 python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764391013.3792095-102-28705876702986/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=313ee6c5e98aa318ee46868b9de42aec2db266a7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:36:54 np0005539482 python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 23:36:54 np0005539482 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 28 23:36:54 np0005539482 systemd[1]: Stopped Network Manager Wait Online.
Nov 28 23:36:54 np0005539482 systemd[1]: Stopping Network Manager Wait Online...
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9098] caught SIGTERM, shutting down normally.
Nov 28 23:36:54 np0005539482 systemd[1]: Stopping Network Manager...
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9108] dhcp4 (eth0): canceled DHCP transaction
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9108] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9109] dhcp4 (eth0): state changed no lease
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9113] manager: NetworkManager state is now CONNECTING
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9213] dhcp4 (eth1): canceled DHCP transaction
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9214] dhcp4 (eth1): state changed no lease
Nov 28 23:36:54 np0005539482 NetworkManager[856]: <info>  [1764391014.9281] exiting (success)
Nov 28 23:36:54 np0005539482 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 23:36:54 np0005539482 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 23:36:54 np0005539482 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 28 23:36:54 np0005539482 systemd[1]: Stopped Network Manager.
Nov 28 23:36:54 np0005539482 systemd[1]: Starting Network Manager...
Nov 28 23:36:54 np0005539482 NetworkManager[7200]: <info>  [1764391014.9929] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:919d61e4-148b-4df4-a773-feb4933c1c42)
Nov 28 23:36:54 np0005539482 NetworkManager[7200]: <info>  [1764391014.9931] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 28 23:36:54 np0005539482 NetworkManager[7200]: <info>  [1764391014.9996] manager[0x5562679f0070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 28 23:36:55 np0005539482 systemd[1]: Starting Hostname Service...
Nov 28 23:36:55 np0005539482 systemd[1]: Started Hostname Service.
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1165] hostname: hostname: using hostnamed
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1166] hostname: static hostname changed from (none) to "np0005539482.novalocal"
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1173] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1179] manager[0x5562679f0070]: rfkill: Wi-Fi hardware radio set enabled
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1179] manager[0x5562679f0070]: rfkill: WWAN hardware radio set enabled
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1223] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1223] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1224] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1225] manager: Networking is enabled by state file
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1230] settings: Loaded settings plugin: keyfile (internal)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1237] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1282] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1299] dhcp: init: Using DHCP client 'internal'
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1304] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1313] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1326] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1340] device (lo): Activation: starting connection 'lo' (aeac58a6-e034-4337-948c-d58870c36302)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1352] device (eth0): carrier: link connected
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1360] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1371] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1373] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1386] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1399] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1410] device (eth1): carrier: link connected
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1417] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1427] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6) (indicated)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1427] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1439] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1451] device (eth1): Activation: starting connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6)
Nov 28 23:36:55 np0005539482 systemd[1]: Started Network Manager.
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1462] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1470] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1477] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1481] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1488] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1494] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1499] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1505] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1511] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1523] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1533] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1555] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1580] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1589] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1599] device (lo): Activation: successful, device activated.
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1611] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1636] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 28 23:36:55 np0005539482 systemd[1]: Starting Network Manager Wait Online...
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1727] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1764] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1767] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1772] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1777] device (eth0): Activation: successful, device activated.
Nov 28 23:36:55 np0005539482 NetworkManager[7200]: <info>  [1764391015.1784] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 28 23:36:55 np0005539482 python3[7273]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-7f1d-78cb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:37:05 np0005539482 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 23:37:25 np0005539482 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3469] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 28 23:37:40 np0005539482 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 23:37:40 np0005539482 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3768] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3771] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3777] device (eth1): Activation: successful, device activated.
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3781] manager: startup complete
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3783] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <warn>  [1764391060.3786] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3793] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 systemd[1]: Finished Network Manager Wait Online.
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3933] dhcp4 (eth1): canceled DHCP transaction
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3934] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3934] dhcp4 (eth1): state changed no lease
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3949] policy: auto-activating connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3953] device (eth1): Activation: starting connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3954] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3956] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3962] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.3969] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.4001] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.4003] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 23:37:40 np0005539482 NetworkManager[7200]: <info>  [1764391060.4008] device (eth1): Activation: successful, device activated.
Nov 28 23:37:50 np0005539482 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 23:37:52 np0005539482 python3[7382]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:37:53 np0005539482 python3[7455]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764391072.6651232-267-220928140765279/source _original_basename=tmp6z37wb1h follow=False checksum=2dfbd593b187155bf8a3fd475333efd17513319b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:38:04 np0005539482 systemd[4298]: Starting Mark boot as successful...
Nov 28 23:38:04 np0005539482 systemd[4298]: Finished Mark boot as successful.
Nov 28 23:38:53 np0005539482 systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Nov 28 23:41:04 np0005539482 systemd[4298]: Created slice User Background Tasks Slice.
Nov 28 23:41:04 np0005539482 systemd[4298]: Starting Cleanup of User's Temporary Files and Directories...
Nov 28 23:41:04 np0005539482 systemd[4298]: Finished Cleanup of User's Temporary Files and Directories.
Nov 28 23:42:41 np0005539482 systemd-logind[793]: New session 3 of user zuul.
Nov 28 23:42:41 np0005539482 systemd[1]: Started Session 3 of User zuul.
Nov 28 23:42:41 np0005539482 python3[7540]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-9482-9f77-000000001cc4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:42:41 np0005539482 python3[7569]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:42:42 np0005539482 python3[7595]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:42:42 np0005539482 python3[7621]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:42:42 np0005539482 python3[7647]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:42:43 np0005539482 python3[7673]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:42:43 np0005539482 python3[7751]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:42:44 np0005539482 python3[7824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764391363.531705-468-98046396210304/source _original_basename=tmpb0vb3svx follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:42:45 np0005539482 python3[7874]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 23:42:45 np0005539482 systemd[1]: Reloading.
Nov 28 23:42:45 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 23:42:46 np0005539482 python3[7930]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 28 23:42:47 np0005539482 python3[7956]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:42:47 np0005539482 python3[7984]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:42:47 np0005539482 python3[8012]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:42:47 np0005539482 python3[8040]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:42:48 np0005539482 python3[8067]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-9482-9f77-000000001ccb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:42:48 np0005539482 python3[8097]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 23:42:50 np0005539482 systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Nov 28 23:42:50 np0005539482 systemd[1]: session-3.scope: Deactivated successfully.
Nov 28 23:42:50 np0005539482 systemd[1]: session-3.scope: Consumed 3.878s CPU time.
Nov 28 23:42:50 np0005539482 systemd-logind[793]: Removed session 3.
Nov 28 23:42:52 np0005539482 systemd-logind[793]: New session 4 of user zuul.
Nov 28 23:42:52 np0005539482 systemd[1]: Started Session 4 of User zuul.
Nov 28 23:42:52 np0005539482 python3[8133]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 23:43:06 np0005539482 kernel: SELinux:  Converting 385 SID table entries...
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 23:43:06 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 23:43:15 np0005539482 kernel: SELinux:  Converting 385 SID table entries...
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 23:43:15 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 23:43:24 np0005539482 kernel: SELinux:  Converting 385 SID table entries...
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 23:43:24 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 23:43:25 np0005539482 setsebool[8202]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 28 23:43:25 np0005539482 setsebool[8202]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 28 23:43:37 np0005539482 kernel: SELinux:  Converting 388 SID table entries...
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 23:43:37 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 23:43:56 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 28 23:43:56 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 23:43:56 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 28 23:43:56 np0005539482 systemd[1]: Reloading.
Nov 28 23:43:56 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 23:43:56 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 23:44:02 np0005539482 python3[13858]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-b818-508a-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:44:03 np0005539482 kernel: evm: overlay not supported
Nov 28 23:44:03 np0005539482 systemd[4298]: Starting D-Bus User Message Bus...
Nov 28 23:44:03 np0005539482 dbus-broker-launch[14092]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 28 23:44:03 np0005539482 dbus-broker-launch[14092]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 28 23:44:03 np0005539482 systemd[4298]: Started D-Bus User Message Bus.
Nov 28 23:44:03 np0005539482 dbus-broker-lau[14092]: Ready
Nov 28 23:44:03 np0005539482 systemd[4298]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 28 23:44:03 np0005539482 systemd[4298]: Created slice Slice /user.
Nov 28 23:44:03 np0005539482 systemd[4298]: podman-14073.scope: unit configures an IP firewall, but not running as root.
Nov 28 23:44:03 np0005539482 systemd[4298]: (This warning is only shown for the first unit using IP firewalling.)
Nov 28 23:44:03 np0005539482 systemd[4298]: Started podman-14073.scope.
Nov 28 23:44:03 np0005539482 systemd[4298]: Started podman-pause-0438675b.scope.
Nov 28 23:44:04 np0005539482 python3[14447]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.30:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.30:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:44:04 np0005539482 python3[14447]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 28 23:44:04 np0005539482 systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Nov 28 23:44:04 np0005539482 systemd[1]: session-4.scope: Deactivated successfully.
Nov 28 23:44:04 np0005539482 systemd[1]: session-4.scope: Consumed 58.854s CPU time.
Nov 28 23:44:04 np0005539482 systemd-logind[793]: Removed session 4.
Nov 28 23:44:27 np0005539482 systemd-logind[793]: New session 5 of user zuul.
Nov 28 23:44:27 np0005539482 systemd[1]: Started Session 5 of User zuul.
Nov 28 23:44:27 np0005539482 python3[23588]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAZtOFYhQMEa5nYlDS3yTR0mwPfNdibYk5CkrJGGicpFqhJ3ZDd/9qZuUQiiYA5rEM9cOLorGiDfXnpK64Jn/o= zuul@np0005539481.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:44:28 np0005539482 python3[23791]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAZtOFYhQMEa5nYlDS3yTR0mwPfNdibYk5CkrJGGicpFqhJ3ZDd/9qZuUQiiYA5rEM9cOLorGiDfXnpK64Jn/o= zuul@np0005539481.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:44:28 np0005539482 python3[24139]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539482.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 28 23:44:29 np0005539482 python3[24370]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAZtOFYhQMEa5nYlDS3yTR0mwPfNdibYk5CkrJGGicpFqhJ3ZDd/9qZuUQiiYA5rEM9cOLorGiDfXnpK64Jn/o= zuul@np0005539481.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 23:44:29 np0005539482 python3[24650]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:44:30 np0005539482 python3[24919]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764391469.5048835-135-234232653152013/source _original_basename=tmpsqondq3e follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:44:30 np0005539482 python3[25251]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 28 23:44:31 np0005539482 systemd[1]: Starting Hostname Service...
Nov 28 23:44:31 np0005539482 systemd[1]: Started Hostname Service.
Nov 28 23:44:31 np0005539482 systemd-hostnamed[25361]: Changed pretty hostname to 'compute-0'
Nov 28 23:44:31 np0005539482 systemd-hostnamed[25361]: Hostname set to <compute-0> (static)
Nov 28 23:44:31 np0005539482 NetworkManager[7200]: <info>  [1764391471.0943] hostname: static hostname changed from "np0005539482.novalocal" to "compute-0"
Nov 28 23:44:31 np0005539482 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 23:44:31 np0005539482 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 23:44:31 np0005539482 systemd[1]: session-5.scope: Deactivated successfully.
Nov 28 23:44:31 np0005539482 systemd[1]: session-5.scope: Consumed 2.076s CPU time.
Nov 28 23:44:31 np0005539482 systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Nov 28 23:44:31 np0005539482 systemd-logind[793]: Removed session 5.
Nov 28 23:44:41 np0005539482 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 23:44:41 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 23:44:41 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 28 23:44:41 np0005539482 systemd[1]: man-db-cache-update.service: Consumed 55.108s CPU time.
Nov 28 23:44:41 np0005539482 systemd[1]: run-r6d2abc60798b4867af5b1b4e8f1b42bc.service: Deactivated successfully.
Nov 28 23:45:01 np0005539482 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 23:48:03 np0005539482 systemd-logind[793]: New session 6 of user zuul.
Nov 28 23:48:03 np0005539482 systemd[1]: Started Session 6 of User zuul.
Nov 28 23:48:04 np0005539482 python3[30050]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 23:48:05 np0005539482 python3[30166]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:06 np0005539482 python3[30239]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:06 np0005539482 python3[30265]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:06 np0005539482 python3[30338]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:07 np0005539482 python3[30364]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:07 np0005539482 python3[30437]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:07 np0005539482 python3[30463]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:07 np0005539482 python3[30536]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:08 np0005539482 python3[30562]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:08 np0005539482 python3[30635]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:08 np0005539482 python3[30661]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:09 np0005539482 python3[30734]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:09 np0005539482 python3[30762]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 23:48:09 np0005539482 python3[30835]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 23:48:20 np0005539482 python3[30893]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:49:49 np0005539482 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 28 23:49:49 np0005539482 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 28 23:49:49 np0005539482 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 28 23:49:49 np0005539482 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 28 23:53:20 np0005539482 systemd[1]: session-6.scope: Deactivated successfully.
Nov 28 23:53:20 np0005539482 systemd[1]: session-6.scope: Consumed 4.870s CPU time.
Nov 28 23:53:20 np0005539482 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Nov 28 23:53:20 np0005539482 systemd-logind[793]: Removed session 6.
Nov 28 23:59:30 np0005539482 systemd-logind[793]: New session 7 of user zuul.
Nov 28 23:59:30 np0005539482 systemd[1]: Started Session 7 of User zuul.
Nov 28 23:59:31 np0005539482 python3.9[31111]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 23:59:34 np0005539482 python3.9[31295]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 23:59:42 np0005539482 systemd[1]: session-7.scope: Deactivated successfully.
Nov 28 23:59:42 np0005539482 systemd[1]: session-7.scope: Consumed 8.106s CPU time.
Nov 28 23:59:42 np0005539482 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Nov 28 23:59:42 np0005539482 systemd-logind[793]: Removed session 7.
Nov 28 23:59:57 np0005539482 systemd-logind[793]: New session 8 of user zuul.
Nov 28 23:59:57 np0005539482 systemd[1]: Started Session 8 of User zuul.
Nov 28 23:59:58 np0005539482 python3.9[31505]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 28 23:59:59 np0005539482 python3.9[31679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:00:00 np0005539482 python3.9[31832]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:00:01 np0005539482 python3.9[31985]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:00:02 np0005539482 python3.9[32137]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:00:03 np0005539482 python3.9[32289]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:00:04 np0005539482 python3.9[32412]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392402.866403-73-66379025660430/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:00:04 np0005539482 python3.9[32564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:00:05 np0005539482 python3.9[32720]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:00:06 np0005539482 python3.9[32872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:00:07 np0005539482 python3.9[33022]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:00:12 np0005539482 python3.9[33275]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:00:12 np0005539482 python3.9[33425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:00:14 np0005539482 python3.9[33579]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:00:15 np0005539482 python3.9[33737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:00:16 np0005539482 python3.9[33821]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:00:58 np0005539482 systemd[1]: Reloading.
Nov 29 00:00:58 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:00:59 np0005539482 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 00:00:59 np0005539482 systemd[1]: Reloading.
Nov 29 00:00:59 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:00:59 np0005539482 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 00:00:59 np0005539482 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 00:00:59 np0005539482 systemd[1]: Reloading.
Nov 29 00:00:59 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:00:59 np0005539482 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 00:01:00 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:01:00 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:01:00 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:02:01 np0005539482 kernel: SELinux:  Converting 2717 SID table entries...
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:02:01 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:02:02 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 00:02:02 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:02:02 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:02:02 np0005539482 systemd[1]: Reloading.
Nov 29 00:02:02 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:02:02 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:02:03 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:02:03 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:02:03 np0005539482 systemd[1]: man-db-cache-update.service: Consumed 1.032s CPU time.
Nov 29 00:02:03 np0005539482 systemd[1]: run-r9dd143f0c8464a9e84cfa4542bc1d09a.service: Deactivated successfully.
Nov 29 00:02:03 np0005539482 python3.9[35339]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:05 np0005539482 python3.9[35620]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 00:02:06 np0005539482 python3.9[35772]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 00:02:09 np0005539482 python3.9[35925]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:02:10 np0005539482 python3.9[36078]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 00:02:11 np0005539482 python3.9[36230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:02:12 np0005539482 python3.9[36382]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:02:15 np0005539482 python3.9[36506]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392531.649347-236-86836183709546/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:02:16 np0005539482 python3.9[36658]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:02:17 np0005539482 python3.9[36810]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:18 np0005539482 python3.9[36963]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:02:19 np0005539482 python3.9[37115]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 00:02:19 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:02:19 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:02:20 np0005539482 python3.9[37269]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 00:02:21 np0005539482 python3.9[37427]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 00:02:22 np0005539482 python3.9[37587]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 00:02:23 np0005539482 python3.9[37740]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 00:02:23 np0005539482 python3.9[37898]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 00:02:24 np0005539482 python3.9[38050]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:02:26 np0005539482 python3.9[38203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:02:27 np0005539482 python3.9[38355]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:02:27 np0005539482 python3.9[38478]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392546.9201367-355-57568386300701/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:02:29 np0005539482 python3.9[38630]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:02:29 np0005539482 systemd[1]: Starting Load Kernel Modules...
Nov 29 00:02:29 np0005539482 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 00:02:29 np0005539482 kernel: Bridge firewalling registered
Nov 29 00:02:29 np0005539482 systemd-modules-load[38634]: Inserted module 'br_netfilter'
Nov 29 00:02:29 np0005539482 systemd[1]: Finished Load Kernel Modules.
Nov 29 00:02:30 np0005539482 python3.9[38789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:02:30 np0005539482 python3.9[38912]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392549.5754542-378-229485903586267/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:02:31 np0005539482 python3.9[39064]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:02:34 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:02:34 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:02:35 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:02:35 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:02:35 np0005539482 systemd[1]: Reloading.
Nov 29 00:02:35 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:02:35 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:02:36 np0005539482 python3.9[40423]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:02:37 np0005539482 python3.9[41419]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 00:02:38 np0005539482 python3.9[42265]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:02:39 np0005539482 python3.9[43240]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:39 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:02:39 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:02:39 np0005539482 systemd[1]: man-db-cache-update.service: Consumed 4.984s CPU time.
Nov 29 00:02:39 np0005539482 systemd[1]: run-rbcb73c96f7fb44f195e91c23fd0ba4ed.service: Deactivated successfully.
Nov 29 00:02:39 np0005539482 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 00:02:39 np0005539482 systemd[1]: Starting Authorization Manager...
Nov 29 00:02:39 np0005539482 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 00:02:39 np0005539482 polkitd[43510]: Started polkitd version 0.117
Nov 29 00:02:39 np0005539482 systemd[1]: Started Authorization Manager.
Nov 29 00:02:40 np0005539482 python3.9[43680]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:02:40 np0005539482 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 00:02:40 np0005539482 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 00:02:40 np0005539482 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 00:02:40 np0005539482 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 00:02:41 np0005539482 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 00:02:41 np0005539482 python3.9[43842]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 00:02:44 np0005539482 python3.9[43994]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:02:44 np0005539482 systemd[1]: Reloading.
Nov 29 00:02:44 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:02:44 np0005539482 systemd[1]: Starting dnf makecache...
Nov 29 00:02:44 np0005539482 dnf[44032]: Failed determining last makecache time.
Nov 29 00:02:44 np0005539482 dnf[44032]: delorean-openstack-barbican-42b4c41831408a8e323 114 kB/s | 3.0 kB     00:00
Nov 29 00:02:44 np0005539482 dnf[44032]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 152 kB/s | 3.0 kB     00:00
Nov 29 00:02:44 np0005539482 dnf[44032]: delorean-openstack-cinder-1c00d6490d88e436f26ef 157 kB/s | 3.0 kB     00:00
Nov 29 00:02:44 np0005539482 dnf[44032]: delorean-python-stevedore-c4acc5639fd2329372142 157 kB/s | 3.0 kB     00:00
Nov 29 00:02:44 np0005539482 dnf[44032]: delorean-python-cloudkitty-tests-tempest-2c80f8 160 kB/s | 3.0 kB     00:00
Nov 29 00:02:44 np0005539482 dnf[44032]: delorean-os-net-config-9758ab42364673d01bc5014e 153 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 134 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-python-designate-tests-tempest-347fdbc 146 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-glance-1fd12c29b339f30fe823e 148 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 154 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-manila-3c01b7181572c95dac462 169 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-python-whitebox-neutron-tests-tempest- 170 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-octavia-ba397f07a7331190208c 167 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-watcher-c014f81a8647287f6dcc 167 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-python-tcib-1124124ec06aadbac34f0d340b 157 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 161 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-swift-dc98a8463506ac520c469a 159 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-python-tempestconf-8515371b7cceebd4282 133 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: delorean-openstack-heat-ui-013accbfd179753bc3f0 158 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: CentOS Stream 9 - BaseOS                         78 kB/s | 7.3 kB     00:00
Nov 29 00:02:45 np0005539482 python3.9[44194]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:02:45 np0005539482 systemd[1]: Reloading.
Nov 29 00:02:45 np0005539482 dnf[44032]: CentOS Stream 9 - AppStream                      77 kB/s | 7.4 kB     00:00
Nov 29 00:02:45 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:02:45 np0005539482 dnf[44032]: CentOS Stream 9 - CRB                            83 kB/s | 7.2 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: CentOS Stream 9 - Extras packages                78 kB/s | 8.3 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: dlrn-antelope-testing                           181 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: dlrn-antelope-build-deps                        161 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: centos9-rabbitmq                                 95 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: centos9-storage                                 128 kB/s | 3.0 kB     00:00
Nov 29 00:02:45 np0005539482 dnf[44032]: centos9-opstools                                132 kB/s | 3.0 kB     00:00
Nov 29 00:02:46 np0005539482 dnf[44032]: NFV SIG OpenvSwitch                             137 kB/s | 3.0 kB     00:00
Nov 29 00:02:46 np0005539482 dnf[44032]: repo-setup-centos-appstream                     109 kB/s | 4.4 kB     00:00
Nov 29 00:02:46 np0005539482 dnf[44032]: repo-setup-centos-baseos                        162 kB/s | 3.9 kB     00:00
Nov 29 00:02:46 np0005539482 dnf[44032]: repo-setup-centos-highavailability               98 kB/s | 3.9 kB     00:00
Nov 29 00:02:46 np0005539482 python3.9[44409]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:46 np0005539482 dnf[44032]: repo-setup-centos-powertools                    177 kB/s | 4.3 kB     00:00
Nov 29 00:02:46 np0005539482 dnf[44032]: Extra Packages for Enterprise Linux 9 - x86_64  212 kB/s |  33 kB     00:00
Nov 29 00:02:47 np0005539482 python3.9[44569]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:47 np0005539482 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 00:02:47 np0005539482 dnf[44032]: Metadata cache created.
Nov 29 00:02:47 np0005539482 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 00:02:47 np0005539482 systemd[1]: Finished dnf makecache.
Nov 29 00:02:47 np0005539482 systemd[1]: dnf-makecache.service: Consumed 1.717s CPU time.
Nov 29 00:02:47 np0005539482 python3.9[44723]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:49 np0005539482 python3.9[44885]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:02:50 np0005539482 python3.9[45038]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:02:50 np0005539482 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 00:02:50 np0005539482 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 00:02:50 np0005539482 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 00:02:50 np0005539482 systemd[1]: Starting Apply Kernel Variables...
Nov 29 00:02:50 np0005539482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 00:02:50 np0005539482 systemd[1]: Finished Apply Kernel Variables.
Nov 29 00:02:51 np0005539482 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 00:02:51 np0005539482 systemd[1]: session-8.scope: Consumed 2min 9.959s CPU time.
Nov 29 00:02:51 np0005539482 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Nov 29 00:02:51 np0005539482 systemd-logind[793]: Removed session 8.
Nov 29 00:02:56 np0005539482 systemd-logind[793]: New session 9 of user zuul.
Nov 29 00:02:56 np0005539482 systemd[1]: Started Session 9 of User zuul.
Nov 29 00:02:57 np0005539482 python3.9[45221]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:02:58 np0005539482 python3.9[45377]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 00:02:59 np0005539482 python3.9[45530]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 00:03:00 np0005539482 python3.9[45688]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 00:03:01 np0005539482 python3.9[45848]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:03:02 np0005539482 python3.9[45932]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 00:03:05 np0005539482 python3.9[46097]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:03:16 np0005539482 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:03:16 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:03:16 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 00:03:16 np0005539482 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 00:03:17 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:03:17 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:03:18 np0005539482 systemd[1]: Reloading.
Nov 29 00:03:18 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:03:18 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:03:18 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:03:18 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:03:18 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:03:18 np0005539482 systemd[1]: run-r9d5a5e8d40854739bbe0c317674b6ed0.service: Deactivated successfully.
Nov 29 00:03:19 np0005539482 python3.9[47195]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:03:20 np0005539482 systemd[1]: Reloading.
Nov 29 00:03:21 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:03:21 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:03:21 np0005539482 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 00:03:21 np0005539482 chown[47237]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 00:03:21 np0005539482 ovs-ctl[47242]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 00:03:21 np0005539482 ovs-ctl[47242]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 00:03:21 np0005539482 ovs-ctl[47242]: Starting ovsdb-server [  OK  ]
Nov 29 00:03:21 np0005539482 ovs-vsctl[47291]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 00:03:21 np0005539482 ovs-vsctl[47307]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"63cfe9d2-e938-418d-9401-5d1a600b4ede\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 00:03:21 np0005539482 ovs-ctl[47242]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 00:03:21 np0005539482 ovs-vsctl[47313]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 00:03:21 np0005539482 ovs-ctl[47242]: Enabling remote OVSDB managers [  OK  ]
Nov 29 00:03:21 np0005539482 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 00:03:21 np0005539482 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 00:03:21 np0005539482 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 00:03:21 np0005539482 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 00:03:21 np0005539482 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 00:03:21 np0005539482 ovs-ctl[47362]: Inserting openvswitch module [  OK  ]
Nov 29 00:03:21 np0005539482 ovs-ctl[47331]: Starting ovs-vswitchd [  OK  ]
Nov 29 00:03:21 np0005539482 ovs-vsctl[47380]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 00:03:21 np0005539482 ovs-ctl[47331]: Enabling remote OVSDB managers [  OK  ]
Nov 29 00:03:21 np0005539482 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 00:03:21 np0005539482 systemd[1]: Starting Open vSwitch...
Nov 29 00:03:21 np0005539482 systemd[1]: Finished Open vSwitch.
Nov 29 00:03:23 np0005539482 python3.9[47532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:03:23 np0005539482 python3.9[47684]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 00:03:25 np0005539482 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:03:25 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:03:26 np0005539482 python3.9[47839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:03:26 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 00:03:27 np0005539482 python3.9[47997]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:03:29 np0005539482 python3.9[48150]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:03:30 np0005539482 python3.9[48437]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 00:03:31 np0005539482 python3.9[48587]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:03:32 np0005539482 python3.9[48741]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:03:33 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:03:33 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:03:33 np0005539482 systemd[1]: Reloading.
Nov 29 00:03:33 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:03:33 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:03:34 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:03:34 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:03:34 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:03:34 np0005539482 systemd[1]: run-r5f0dbb3c43ee4272912c43634e6528ec.service: Deactivated successfully.
Nov 29 00:03:35 np0005539482 python3.9[49059]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:03:35 np0005539482 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 00:03:35 np0005539482 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 00:03:35 np0005539482 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 00:03:35 np0005539482 NetworkManager[7200]: <info>  [1764392615.5059] caught SIGTERM, shutting down normally.
Nov 29 00:03:35 np0005539482 systemd[1]: Stopping Network Manager...
Nov 29 00:03:35 np0005539482 NetworkManager[7200]: <info>  [1764392615.5070] dhcp4 (eth0): canceled DHCP transaction
Nov 29 00:03:35 np0005539482 NetworkManager[7200]: <info>  [1764392615.5070] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:03:35 np0005539482 NetworkManager[7200]: <info>  [1764392615.5070] dhcp4 (eth0): state changed no lease
Nov 29 00:03:35 np0005539482 NetworkManager[7200]: <info>  [1764392615.5073] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 00:03:35 np0005539482 NetworkManager[7200]: <info>  [1764392615.5132] exiting (success)
Nov 29 00:03:35 np0005539482 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 00:03:35 np0005539482 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 00:03:35 np0005539482 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 00:03:35 np0005539482 systemd[1]: Stopped Network Manager.
Nov 29 00:03:35 np0005539482 systemd[1]: NetworkManager.service: Consumed 11.554s CPU time, 4.1M memory peak, read 0B from disk, written 21.5K to disk.
Nov 29 00:03:35 np0005539482 systemd[1]: Starting Network Manager...
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.6082] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:919d61e4-148b-4df4-a773-feb4933c1c42)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.6083] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.6138] manager[0x5587cb08d090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 00:03:35 np0005539482 systemd[1]: Starting Hostname Service...
Nov 29 00:03:35 np0005539482 systemd[1]: Started Hostname Service.
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7158] hostname: hostname: using hostnamed
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7160] hostname: static hostname changed from (none) to "compute-0"
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7164] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7167] manager[0x5587cb08d090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7167] manager[0x5587cb08d090]: rfkill: WWAN hardware radio set enabled
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7185] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7193] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7194] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7194] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7195] manager: Networking is enabled by state file
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7196] settings: Loaded settings plugin: keyfile (internal)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7199] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7221] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7229] dhcp: init: Using DHCP client 'internal'
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7231] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7236] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7240] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7246] device (lo): Activation: starting connection 'lo' (aeac58a6-e034-4337-948c-d58870c36302)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7252] device (eth0): carrier: link connected
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7255] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7259] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7259] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7264] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7270] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7275] device (eth1): carrier: link connected
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7278] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7283] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4) (indicated)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7284] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7290] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7296] device (eth1): Activation: starting connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7303] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 00:03:35 np0005539482 systemd[1]: Started Network Manager.
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7309] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7311] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7313] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7315] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7317] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7319] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7320] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7321] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7325] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7327] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7336] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7349] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7366] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7370] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7428] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7433] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7434] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7438] device (lo): Activation: successful, device activated.
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7695] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7697] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7699] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7703] device (eth1): Activation: successful, device activated.
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7716] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7717] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7721] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7725] device (eth0): Activation: successful, device activated.
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7730] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 00:03:35 np0005539482 NetworkManager[49073]: <info>  [1764392615.7734] manager: startup complete
Nov 29 00:03:35 np0005539482 systemd[1]: Starting Network Manager Wait Online...
Nov 29 00:03:35 np0005539482 systemd[1]: Finished Network Manager Wait Online.
Nov 29 00:03:36 np0005539482 python3.9[49286]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:03:43 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:03:43 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:03:43 np0005539482 systemd[1]: Reloading.
Nov 29 00:03:43 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:03:43 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:03:43 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:03:44 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:03:44 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:03:44 np0005539482 systemd[1]: run-r3f3c4d21c5f04aa6950471f271280895.service: Deactivated successfully.
Nov 29 00:03:45 np0005539482 python3.9[49746]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:03:45 np0005539482 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 00:03:46 np0005539482 python3.9[49898]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:46 np0005539482 python3.9[50052]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:47 np0005539482 python3.9[50204]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:48 np0005539482 python3.9[50356]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:48 np0005539482 python3.9[50508]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:49 np0005539482 python3.9[50660]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:03:50 np0005539482 python3.9[50783]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392629.1658833-229-70588590808844/.source _original_basename=.n18vukfd follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:50 np0005539482 python3.9[50935]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:51 np0005539482 python3.9[51087]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 00:03:52 np0005539482 python3.9[51239]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:03:55 np0005539482 python3.9[51666]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 00:03:56 np0005539482 ansible-async_wrapper.py[51841]: Invoked with j810879445122 300 /home/zuul/.ansible/tmp/ansible-tmp-1764392635.5922756-295-165022383322458/AnsiballZ_edpm_os_net_config.py _
Nov 29 00:03:56 np0005539482 ansible-async_wrapper.py[51844]: Starting module and watcher
Nov 29 00:03:56 np0005539482 ansible-async_wrapper.py[51844]: Start watching 51845 (300)
Nov 29 00:03:56 np0005539482 ansible-async_wrapper.py[51845]: Start module (51845)
Nov 29 00:03:56 np0005539482 ansible-async_wrapper.py[51841]: Return async_wrapper task started.
Nov 29 00:03:56 np0005539482 python3.9[51846]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 00:03:57 np0005539482 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 00:03:57 np0005539482 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 00:03:57 np0005539482 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 00:03:57 np0005539482 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 00:03:57 np0005539482 kernel: cfg80211: failed to load regulatory.db
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6018] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6030] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6444] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6446] audit: op="connection-add" uuid="7134afe0-a31f-4294-bb07-316f3a9e03e9" name="br-ex-br" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6459] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6461] audit: op="connection-add" uuid="4bf186af-c248-48ba-a07d-3c0e65d194df" name="br-ex-port" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6470] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6472] audit: op="connection-add" uuid="bd97148d-e4f7-4765-87ee-c00ec35a7ccc" name="eth1-port" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6482] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6483] audit: op="connection-add" uuid="f0efa15f-cb45-45d2-bb7d-16b52fe1d2b2" name="vlan20-port" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6494] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6496] audit: op="connection-add" uuid="9d83f726-2e99-487a-a917-6d4c8d3c35c4" name="vlan21-port" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6505] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6506] audit: op="connection-add" uuid="56d4e035-1f1c-402e-a0b9-5300d1d08bf7" name="vlan22-port" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6516] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6518] audit: op="connection-add" uuid="d8366d4d-6eb8-4359-a534-94e4585031a4" name="vlan23-port" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6535] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6549] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6551] audit: op="connection-add" uuid="ab610bb0-cf0e-449c-b95c-b2b3a1383e00" name="br-ex-if" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6600] audit: op="connection-update" uuid="ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4" name="ci-private-network" args="ipv4.method,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.routes,ovs-external-ids.data,connection.controller,connection.master,connection.port-type,connection.slave-type,connection.timestamp,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.routes,ipv6.routing-rules,ovs-interface.type" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6615] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6616] audit: op="connection-add" uuid="c413ae8e-9915-4a9f-ae3c-de6da5b56e0e" name="vlan20-if" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6630] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6632] audit: op="connection-add" uuid="80d50742-f63a-4985-aaeb-ea9f89dcf489" name="vlan21-if" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6645] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6646] audit: op="connection-add" uuid="be0c2d3d-189e-4692-a4f8-0b760f1e6e68" name="vlan22-if" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6661] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6662] audit: op="connection-add" uuid="a2d772d0-ce56-40ed-b7f3-df914f508e4e" name="vlan23-if" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6672] audit: op="connection-delete" uuid="68471d98-bb78-39be-9a57-275a98f2e1d6" name="Wired connection 1" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6682] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6691] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6695] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7134afe0-a31f-4294-bb07-316f3a9e03e9)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6696] audit: op="connection-activate" uuid="7134afe0-a31f-4294-bb07-316f3a9e03e9" name="br-ex-br" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6697] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6703] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6707] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4bf186af-c248-48ba-a07d-3c0e65d194df)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6708] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6714] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6717] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (bd97148d-e4f7-4765-87ee-c00ec35a7ccc)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6719] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6725] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6729] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f0efa15f-cb45-45d2-bb7d-16b52fe1d2b2)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6730] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6736] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6739] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9d83f726-2e99-487a-a917-6d4c8d3c35c4)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6741] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6747] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6751] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (56d4e035-1f1c-402e-a0b9-5300d1d08bf7)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6753] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6758] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6762] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d8366d4d-6eb8-4359-a534-94e4585031a4)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6763] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6765] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6767] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6772] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6777] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6780] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ab610bb0-cf0e-449c-b95c-b2b3a1383e00)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6781] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6784] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6786] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6787] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6788] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6798] device (eth1): disconnecting for new activation request.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6798] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6801] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6803] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6804] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6807] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6811] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6815] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c413ae8e-9915-4a9f-ae3c-de6da5b56e0e)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6815] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6818] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6820] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6821] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6824] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6828] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6832] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (80d50742-f63a-4985-aaeb-ea9f89dcf489)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6833] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6835] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6837] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6838] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6840] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6845] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6849] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (be0c2d3d-189e-4692-a4f8-0b760f1e6e68)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6850] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6852] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6854] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6855] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6858] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6862] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6866] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (a2d772d0-ce56-40ed-b7f3-df914f508e4e)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6867] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6870] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6871] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6873] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6874] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6884] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6886] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6889] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6891] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6896] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6899] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6903] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6906] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6908] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6912] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6916] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6919] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6920] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 kernel: ovs-system: entered promiscuous mode
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6925] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6929] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6932] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6933] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6938] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6941] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6944] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6946] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6950] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 kernel: Timeout policy base is empty
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6953] dhcp4 (eth0): canceled DHCP transaction
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6953] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6953] dhcp4 (eth0): state changed no lease
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6954] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 00:03:58 np0005539482 systemd-udevd[51852]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6962] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.6964] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51847 uid=0 result="fail" reason="Device is not activated"
Nov 29 00:03:58 np0005539482 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7006] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7010] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7017] device (eth1): disconnecting for new activation request.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7017] audit: op="connection-activate" uuid="ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4" name="ci-private-network" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7057] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7065] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7071] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7085] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 00:03:58 np0005539482 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7164] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7270] device (eth1): Activation: starting connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7282] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7286] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7294] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7296] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7298] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7300] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7303] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7305] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7306] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7317] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7324] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7327] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7332] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7337] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7340] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7344] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7348] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7352] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7356] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7361] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7365] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7368] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7372] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 kernel: br-ex: entered promiscuous mode
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7377] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7385] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7393] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7450] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7456] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7465] device (eth1): Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7519] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7540] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 kernel: vlan22: entered promiscuous mode
Nov 29 00:03:58 np0005539482 kernel: vlan23: entered promiscuous mode
Nov 29 00:03:58 np0005539482 systemd-udevd[51853]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7650] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7651] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7659] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 kernel: vlan21: entered promiscuous mode
Nov 29 00:03:58 np0005539482 systemd-udevd[51851]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7754] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7767] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 kernel: vlan20: entered promiscuous mode
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7785] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7787] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7804] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7815] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7824] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7880] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7881] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7885] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7893] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7906] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7916] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7934] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7975] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7976] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7977] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7983] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7987] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:03:58 np0005539482 NetworkManager[49073]: <info>  [1764392638.7993] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 00:03:59 np0005539482 NetworkManager[49073]: <info>  [1764392639.9335] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.1078] checkpoint[0x5587cb063950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.1081] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.3744] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.3753] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 python3.9[52207]: ansible-ansible.legacy.async_status Invoked with jid=j810879445122.51841 mode=status _async_dir=/root/.ansible_async
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.5648] audit: op="networking-control" arg="global-dns-configuration" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.5678] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.5703] audit: op="networking-control" arg="global-dns-configuration" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.5737] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.7294] checkpoint[0x5587cb063a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 00:04:00 np0005539482 NetworkManager[49073]: <info>  [1764392640.7302] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 00:04:00 np0005539482 ansible-async_wrapper.py[51845]: Module complete (51845)
Nov 29 00:04:01 np0005539482 ansible-async_wrapper.py[51844]: Done in kid B.
Nov 29 00:04:03 np0005539482 python3.9[52311]: ansible-ansible.legacy.async_status Invoked with jid=j810879445122.51841 mode=status _async_dir=/root/.ansible_async
Nov 29 00:04:04 np0005539482 python3.9[52411]: ansible-ansible.legacy.async_status Invoked with jid=j810879445122.51841 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 00:04:05 np0005539482 python3.9[52563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:04:05 np0005539482 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 00:04:05 np0005539482 python3.9[52686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392644.831004-322-54733926492473/.source.returncode _original_basename=.sumd37f6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:04:06 np0005539482 python3.9[52840]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:04:07 np0005539482 python3.9[52963]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392646.0377102-338-46512492591641/.source.cfg _original_basename=.sbmwoq28 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:04:07 np0005539482 python3.9[53116]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:04:07 np0005539482 systemd[1]: Reloading Network Manager...
Nov 29 00:04:07 np0005539482 NetworkManager[49073]: <info>  [1764392647.9441] audit: op="reload" arg="0" pid=53120 uid=0 result="success"
Nov 29 00:04:07 np0005539482 NetworkManager[49073]: <info>  [1764392647.9453] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 00:04:07 np0005539482 systemd[1]: Reloaded Network Manager.
Nov 29 00:04:08 np0005539482 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 00:04:08 np0005539482 systemd[1]: session-9.scope: Consumed 48.949s CPU time.
Nov 29 00:04:08 np0005539482 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Nov 29 00:04:08 np0005539482 systemd-logind[793]: Removed session 9.
Nov 29 00:04:14 np0005539482 systemd-logind[793]: New session 10 of user zuul.
Nov 29 00:04:14 np0005539482 systemd[1]: Started Session 10 of User zuul.
Nov 29 00:04:15 np0005539482 python3.9[53305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:04:16 np0005539482 python3.9[53459]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:04:17 np0005539482 python3.9[53653]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:04:17 np0005539482 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 00:04:18 np0005539482 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 00:04:18 np0005539482 systemd[1]: session-10.scope: Consumed 2.335s CPU time.
Nov 29 00:04:18 np0005539482 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Nov 29 00:04:18 np0005539482 systemd-logind[793]: Removed session 10.
Nov 29 00:04:24 np0005539482 systemd-logind[793]: New session 11 of user zuul.
Nov 29 00:04:24 np0005539482 systemd[1]: Started Session 11 of User zuul.
Nov 29 00:04:25 np0005539482 python3.9[53836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:04:26 np0005539482 python3.9[53990]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:04:27 np0005539482 python3.9[54147]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:04:28 np0005539482 python3.9[54231]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:04:30 np0005539482 python3.9[54385]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:04:31 np0005539482 python3.9[54580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:04:32 np0005539482 python3.9[54732]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:04:32 np0005539482 systemd[1]: var-lib-containers-storage-overlay-compat2518162104-merged.mount: Deactivated successfully.
Nov 29 00:04:32 np0005539482 podman[54733]: 2025-11-29 05:04:32.560585852 +0000 UTC m=+0.043444690 system refresh
Nov 29 00:04:33 np0005539482 python3.9[54895]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:04:33 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:04:34 np0005539482 python3.9[55018]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392672.7645547-79-253364804622671/.source.json follow=False _original_basename=podman_network_config.j2 checksum=66982087fa23b413eb440583f0a34253a177e035 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:04:34 np0005539482 python3.9[55170]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:04:35 np0005539482 python3.9[55293]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392674.2821767-94-32409837467201/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b723c254c5347521a0bd9978182359a7d08823fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:04:36 np0005539482 python3.9[55445]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:04:36 np0005539482 python3.9[55597]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:04:37 np0005539482 python3.9[55749]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:04:38 np0005539482 python3.9[55901]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:04:39 np0005539482 python3.9[56053]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:04:41 np0005539482 python3.9[56206]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:04:42 np0005539482 python3.9[56360]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:04:43 np0005539482 python3.9[56512]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:04:44 np0005539482 python3.9[56664]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:04:45 np0005539482 python3.9[56817]: ansible-service_facts Invoked
Nov 29 00:04:45 np0005539482 network[56834]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:04:45 np0005539482 network[56835]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:04:45 np0005539482 network[56836]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:04:51 np0005539482 python3.9[57288]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:04:54 np0005539482 python3.9[57441]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 00:04:55 np0005539482 python3.9[57593]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:04:56 np0005539482 python3.9[57718]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392695.2488394-238-205140311183879/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:04:57 np0005539482 python3.9[57872]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:04:57 np0005539482 python3.9[57997]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392696.6753645-253-269650056647101/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:04:58 np0005539482 python3.9[58151]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:00 np0005539482 python3.9[58305]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:05:01 np0005539482 python3.9[58389]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:05:02 np0005539482 python3.9[58543]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:05:03 np0005539482 python3.9[58627]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:05:03 np0005539482 chronyd[785]: chronyd exiting
Nov 29 00:05:03 np0005539482 systemd[1]: Stopping NTP client/server...
Nov 29 00:05:03 np0005539482 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 00:05:03 np0005539482 systemd[1]: Stopped NTP client/server.
Nov 29 00:05:03 np0005539482 systemd[1]: Starting NTP client/server...
Nov 29 00:05:03 np0005539482 chronyd[58635]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 00:05:03 np0005539482 chronyd[58635]: Frequency -23.273 +/- 0.238 ppm read from /var/lib/chrony/drift
Nov 29 00:05:03 np0005539482 chronyd[58635]: Loaded seccomp filter (level 2)
Nov 29 00:05:03 np0005539482 systemd[1]: Started NTP client/server.
Nov 29 00:05:04 np0005539482 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 00:05:04 np0005539482 systemd[1]: session-11.scope: Consumed 26.081s CPU time.
Nov 29 00:05:04 np0005539482 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Nov 29 00:05:04 np0005539482 systemd-logind[793]: Removed session 11.
Nov 29 00:05:09 np0005539482 systemd-logind[793]: New session 12 of user zuul.
Nov 29 00:05:09 np0005539482 systemd[1]: Started Session 12 of User zuul.
Nov 29 00:05:10 np0005539482 python3.9[58816]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:11 np0005539482 python3.9[58968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:12 np0005539482 python3.9[59091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392710.7651577-34-172661159434291/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:12 np0005539482 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 00:05:12 np0005539482 systemd[1]: session-12.scope: Consumed 1.635s CPU time.
Nov 29 00:05:12 np0005539482 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Nov 29 00:05:12 np0005539482 systemd-logind[793]: Removed session 12.
Nov 29 00:05:17 np0005539482 systemd-logind[793]: New session 13 of user zuul.
Nov 29 00:05:17 np0005539482 systemd[1]: Started Session 13 of User zuul.
Nov 29 00:05:18 np0005539482 python3.9[59269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:05:19 np0005539482 python3.9[59425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:20 np0005539482 python3.9[59600]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:21 np0005539482 python3.9[59723]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764392719.8379722-41-240430673059201/.source.json _original_basename=.04p3dp1p follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:22 np0005539482 python3.9[59875]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:23 np0005539482 python3.9[59998]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392721.952187-64-215496796275300/.source _original_basename=.vmt1b2vi follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:24 np0005539482 python3.9[60150]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:05:24 np0005539482 python3.9[60302]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:25 np0005539482 python3.9[60425]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392724.3063686-88-248172249780150/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:05:26 np0005539482 python3.9[60577]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:26 np0005539482 python3.9[60700]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392725.7499294-88-268025956672618/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:05:27 np0005539482 python3.9[60852]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:28 np0005539482 python3.9[61004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:29 np0005539482 python3.9[61127]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392728.0305223-125-274526036898020/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:30 np0005539482 python3.9[61279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:30 np0005539482 python3.9[61402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392729.4679744-140-274158157628362/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:32 np0005539482 python3.9[61554]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:05:32 np0005539482 systemd[1]: Reloading.
Nov 29 00:05:32 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:05:32 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:05:32 np0005539482 systemd[1]: Reloading.
Nov 29 00:05:32 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:05:32 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:05:32 np0005539482 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 00:05:32 np0005539482 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 00:05:33 np0005539482 python3.9[61782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:34 np0005539482 python3.9[61905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392732.8973076-163-81156012680332/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:35 np0005539482 python3.9[62057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:35 np0005539482 python3.9[62180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392734.428242-178-169115281750235/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:36 np0005539482 python3.9[62332]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:05:36 np0005539482 systemd[1]: Reloading.
Nov 29 00:05:36 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:05:36 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:05:36 np0005539482 systemd[1]: Reloading.
Nov 29 00:05:37 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:05:37 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:05:37 np0005539482 systemd[1]: Starting Create netns directory...
Nov 29 00:05:37 np0005539482 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 00:05:37 np0005539482 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 00:05:37 np0005539482 systemd[1]: Finished Create netns directory.
Nov 29 00:05:38 np0005539482 python3.9[62556]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:05:38 np0005539482 network[62573]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:05:38 np0005539482 network[62574]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:05:38 np0005539482 network[62575]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:05:42 np0005539482 python3.9[62837]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:05:42 np0005539482 systemd[1]: Reloading.
Nov 29 00:05:42 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:05:42 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:05:42 np0005539482 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 00:05:43 np0005539482 iptables.init[62877]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 00:05:43 np0005539482 iptables.init[62877]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 00:05:43 np0005539482 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 00:05:43 np0005539482 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 00:05:44 np0005539482 python3.9[63073]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:05:45 np0005539482 python3.9[63227]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:05:45 np0005539482 systemd[1]: Reloading.
Nov 29 00:05:45 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:05:45 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:05:45 np0005539482 systemd[1]: Starting Netfilter Tables...
Nov 29 00:05:45 np0005539482 systemd[1]: Finished Netfilter Tables.
Nov 29 00:05:46 np0005539482 python3.9[63418]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:05:47 np0005539482 python3.9[63571]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:48 np0005539482 python3.9[63696]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392746.9505754-247-102569168809996/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:49 np0005539482 python3.9[63849]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:05:49 np0005539482 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 00:05:49 np0005539482 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 00:05:50 np0005539482 python3.9[64005]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:50 np0005539482 python3.9[64157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:51 np0005539482 python3.9[64280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392750.296178-278-87649035487829/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:52 np0005539482 python3.9[64432]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 00:05:52 np0005539482 systemd[1]: Starting Time & Date Service...
Nov 29 00:05:52 np0005539482 systemd[1]: Started Time & Date Service.
Nov 29 00:05:53 np0005539482 python3.9[64589]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:54 np0005539482 python3.9[64741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:55 np0005539482 python3.9[64864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392754.0648236-313-255410796355777/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:56 np0005539482 python3.9[65018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:56 np0005539482 python3.9[65141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392755.4021316-328-12794856841765/.source.yaml _original_basename=.jvdpze1n follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:57 np0005539482 python3.9[65293]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:05:58 np0005539482 python3.9[65416]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392756.8597713-343-43015515002686/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:05:58 np0005539482 python3.9[65568]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:05:59 np0005539482 python3.9[65721]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:06:00 np0005539482 python3[65874]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 00:06:01 np0005539482 python3.9[66026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:06:02 np0005539482 python3.9[66149]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392760.9234476-382-89658620873407/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:03 np0005539482 python3.9[66301]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:06:03 np0005539482 python3.9[66424]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392762.5360312-397-61039317549493/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:04 np0005539482 python3.9[66576]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:06:05 np0005539482 python3.9[66699]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392763.8853915-412-104477720383442/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:05 np0005539482 python3.9[66851]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:06:06 np0005539482 python3.9[66974]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392765.2667303-427-239337597213465/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:07 np0005539482 python3.9[67126]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:06:07 np0005539482 python3.9[67249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392766.492575-442-203027630620069/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:08 np0005539482 python3.9[67401]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:09 np0005539482 python3.9[67553]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:06:10 np0005539482 python3.9[67712]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:11 np0005539482 python3.9[67865]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:12 np0005539482 python3.9[68017]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:13 np0005539482 python3.9[68169]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 00:06:14 np0005539482 python3.9[68322]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 00:06:14 np0005539482 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 00:06:14 np0005539482 systemd[1]: session-13.scope: Consumed 41.017s CPU time.
Nov 29 00:06:14 np0005539482 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Nov 29 00:06:14 np0005539482 systemd-logind[793]: Removed session 13.
Nov 29 00:06:20 np0005539482 systemd-logind[793]: New session 14 of user zuul.
Nov 29 00:06:20 np0005539482 systemd[1]: Started Session 14 of User zuul.
Nov 29 00:06:21 np0005539482 python3.9[68503]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 00:06:22 np0005539482 python3.9[68655]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:06:22 np0005539482 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 00:06:23 np0005539482 python3.9[68809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:06:24 np0005539482 python3.9[68961]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMckHMduWmwA/jneofKzqltVrdb/vEVNoPwADfQfHjxo2ViAjKtzRJxQm+bTvpTXgt3d3GaLwohXhYMtcnWss0rEYtIGMLiXWJAB76Vi4azFd32Hy0mDTGhpqL5tz3X/QJFmASZVWlpRz77RZoFzhuMtQpF581gmKi8QLN3n4kyPvi8IBRjIvdbSyN1hkk5nbYZFrdOhA0K7FLalaYs9fIyoD0rH+dijNp/mY8EbyOAWiPIFfzMZWqy9OkXlUKH6233dlpLGCHfD1uwqM55rv7g+qtOrKiOnqkc5b24MfjM3Dq8B/kIR3GisItM2fI/avStY0whFRyYPTqysal5H+pXy5+QCOGwsWv0POhypuwSVSbtY3NcfizytHcPT2Au6g3Xx/Gazoxx4fVkVLTjtzhz8URfMzAclsZVcUxtFyZlGHtoXumLkWdYeLYQA4dqkQVL7KwOEQp31HXuBfsc98k/UoOj9+SAEbQrLsEBhRXTSsD2bL350GMA7poDjiSC1k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwQmzwqCS97U8wjy82krUlVUeH2sOvejp9p1btw+sbe#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHbvzG6Snia8dc8X++wUykISUD7zTpLyaTM0CVExLn67fyxHoL2pCwIcx6cP7HnIRC6S3Et2Ooooe+xc0kenKn0=#012 create=True mode=0644 path=/tmp/ansible.pwh8t6zh state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:25 np0005539482 python3.9[69113]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pwh8t6zh' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:06:26 np0005539482 python3.9[69267]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pwh8t6zh state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:26 np0005539482 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 00:06:26 np0005539482 systemd[1]: session-14.scope: Consumed 3.591s CPU time.
Nov 29 00:06:26 np0005539482 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Nov 29 00:06:26 np0005539482 systemd-logind[793]: Removed session 14.
Nov 29 00:06:31 np0005539482 systemd-logind[793]: New session 15 of user zuul.
Nov 29 00:06:31 np0005539482 systemd[1]: Started Session 15 of User zuul.
Nov 29 00:06:32 np0005539482 python3.9[69445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:06:34 np0005539482 python3.9[69601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 00:06:35 np0005539482 python3.9[69755]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:06:36 np0005539482 python3.9[69908]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:06:37 np0005539482 python3.9[70061]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:06:37 np0005539482 python3.9[70215]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:06:39 np0005539482 python3.9[70370]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:06:39 np0005539482 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 00:06:39 np0005539482 systemd[1]: session-15.scope: Consumed 5.153s CPU time.
Nov 29 00:06:39 np0005539482 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Nov 29 00:06:39 np0005539482 systemd-logind[793]: Removed session 15.
Nov 29 00:06:44 np0005539482 systemd-logind[793]: New session 16 of user zuul.
Nov 29 00:06:44 np0005539482 systemd[1]: Started Session 16 of User zuul.
Nov 29 00:06:45 np0005539482 python3.9[70548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:06:46 np0005539482 python3.9[70704]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:06:47 np0005539482 python3.9[70788]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 00:06:49 np0005539482 python3.9[70939]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:06:51 np0005539482 python3.9[71090]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 00:06:52 np0005539482 python3.9[71240]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:06:52 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:06:52 np0005539482 python3.9[71391]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:06:53 np0005539482 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 00:06:53 np0005539482 systemd[1]: session-16.scope: Consumed 5.935s CPU time.
Nov 29 00:06:53 np0005539482 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Nov 29 00:06:53 np0005539482 systemd-logind[793]: Removed session 16.
Nov 29 00:07:01 np0005539482 systemd-logind[793]: New session 17 of user zuul.
Nov 29 00:07:01 np0005539482 systemd[1]: Started Session 17 of User zuul.
Nov 29 00:07:06 np0005539482 python3[72157]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:07:08 np0005539482 python3[72252]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 00:07:09 np0005539482 python3[72279]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:10 np0005539482 python3[72305]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:10 np0005539482 kernel: loop: module loaded
Nov 29 00:07:10 np0005539482 kernel: loop3: detected capacity change from 0 to 41943040
Nov 29 00:07:10 np0005539482 python3[72340]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:10 np0005539482 lvm[72343]: PV /dev/loop3 not used.
Nov 29 00:07:10 np0005539482 lvm[72352]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 00:07:10 np0005539482 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 00:07:10 np0005539482 lvm[72354]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 00:07:10 np0005539482 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 00:07:11 np0005539482 python3[72432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:07:11 np0005539482 python3[72505]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392830.8811626-36184-150558833563769/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:12 np0005539482 python3[72555]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:07:12 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:12 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:12 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:12 np0005539482 systemd[1]: Starting Ceph OSD losetup...
Nov 29 00:07:12 np0005539482 bash[72596]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Nov 29 00:07:12 np0005539482 systemd[1]: Finished Ceph OSD losetup.
Nov 29 00:07:12 np0005539482 lvm[72597]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 00:07:12 np0005539482 lvm[72597]: VG ceph_vg0 finished
Nov 29 00:07:12 np0005539482 python3[72623]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 00:07:13 np0005539482 chronyd[58635]: Selected source 137.220.55.211 (pool.ntp.org)
Nov 29 00:07:14 np0005539482 python3[72650]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:14 np0005539482 python3[72676]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:14 np0005539482 kernel: loop4: detected capacity change from 0 to 41943040
Nov 29 00:07:14 np0005539482 python3[72708]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:14 np0005539482 lvm[72711]: PV /dev/loop4 not used.
Nov 29 00:07:14 np0005539482 lvm[72721]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 00:07:15 np0005539482 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 29 00:07:15 np0005539482 lvm[72723]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 29 00:07:15 np0005539482 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 29 00:07:15 np0005539482 python3[72801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:07:15 np0005539482 python3[72874]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392835.133401-36211-224069913332695/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:16 np0005539482 python3[72924]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:07:16 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:16 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:16 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:16 np0005539482 systemd[1]: Starting Ceph OSD losetup...
Nov 29 00:07:16 np0005539482 bash[72964]: /dev/loop4: [64513]:4327966 (/var/lib/ceph-osd-1.img)
Nov 29 00:07:16 np0005539482 systemd[1]: Finished Ceph OSD losetup.
Nov 29 00:07:16 np0005539482 lvm[72965]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 00:07:16 np0005539482 lvm[72965]: VG ceph_vg1 finished
Nov 29 00:07:16 np0005539482 python3[72991]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 00:07:18 np0005539482 python3[73018]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:18 np0005539482 python3[73044]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:18 np0005539482 kernel: loop5: detected capacity change from 0 to 41943040
Nov 29 00:07:19 np0005539482 python3[73076]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:19 np0005539482 lvm[73079]: PV /dev/loop5 not used.
Nov 29 00:07:19 np0005539482 lvm[73088]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 00:07:19 np0005539482 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 29 00:07:19 np0005539482 lvm[73090]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 29 00:07:19 np0005539482 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 29 00:07:19 np0005539482 python3[73168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:07:20 np0005539482 python3[73241]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392839.509975-36238-40797589490747/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:20 np0005539482 python3[73291]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:07:20 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:20 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:20 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:20 np0005539482 systemd[1]: Starting Ceph OSD losetup...
Nov 29 00:07:20 np0005539482 bash[73331]: /dev/loop5: [64513]:4328625 (/var/lib/ceph-osd-2.img)
Nov 29 00:07:20 np0005539482 systemd[1]: Finished Ceph OSD losetup.
Nov 29 00:07:20 np0005539482 lvm[73332]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 00:07:20 np0005539482 lvm[73332]: VG ceph_vg2 finished
Nov 29 00:07:22 np0005539482 python3[73356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:07:25 np0005539482 python3[73449]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 00:07:26 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:07:26 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:07:27 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:07:27 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:07:27 np0005539482 systemd[1]: run-rbb210d0058144053a79b82b8cc8ed591.service: Deactivated successfully.
Nov 29 00:07:27 np0005539482 python3[73560]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:27 np0005539482 python3[73588]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:28 np0005539482 python3[73652]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:28 np0005539482 python3[73678]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:29 np0005539482 python3[73756]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:07:29 np0005539482 python3[73829]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392849.3172789-36385-221335497443026/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:30 np0005539482 python3[73931]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:07:31 np0005539482 python3[74004]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392850.3555725-36403-100945960930423/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:07:31 np0005539482 python3[74054]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:31 np0005539482 python3[74082]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:32 np0005539482 python3[74110]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:07:32 np0005539482 python3[74138]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:07:32 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:32 np0005539482 systemd-logind[793]: New session 18 of user ceph-admin.
Nov 29 00:07:32 np0005539482 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 00:07:32 np0005539482 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 00:07:32 np0005539482 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 00:07:32 np0005539482 systemd[1]: Starting User Manager for UID 42477...
Nov 29 00:07:33 np0005539482 systemd[74158]: Queued start job for default target Main User Target.
Nov 29 00:07:33 np0005539482 systemd[74158]: Created slice User Application Slice.
Nov 29 00:07:33 np0005539482 systemd[74158]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 00:07:33 np0005539482 systemd[74158]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 00:07:33 np0005539482 systemd[74158]: Reached target Paths.
Nov 29 00:07:33 np0005539482 systemd[74158]: Reached target Timers.
Nov 29 00:07:33 np0005539482 systemd[74158]: Starting D-Bus User Message Bus Socket...
Nov 29 00:07:33 np0005539482 systemd[74158]: Starting Create User's Volatile Files and Directories...
Nov 29 00:07:33 np0005539482 systemd[74158]: Listening on D-Bus User Message Bus Socket.
Nov 29 00:07:33 np0005539482 systemd[74158]: Reached target Sockets.
Nov 29 00:07:33 np0005539482 systemd[74158]: Finished Create User's Volatile Files and Directories.
Nov 29 00:07:33 np0005539482 systemd[74158]: Reached target Basic System.
Nov 29 00:07:33 np0005539482 systemd[74158]: Reached target Main User Target.
Nov 29 00:07:33 np0005539482 systemd[74158]: Startup finished in 156ms.
Nov 29 00:07:33 np0005539482 systemd[1]: Started User Manager for UID 42477.
Nov 29 00:07:33 np0005539482 systemd[1]: Started Session 18 of User ceph-admin.
Nov 29 00:07:33 np0005539482 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 00:07:33 np0005539482 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Nov 29 00:07:33 np0005539482 systemd-logind[793]: Removed session 18.
Nov 29 00:07:35 np0005539482 systemd[1]: var-lib-containers-storage-overlay-compat2735247542-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 00:07:43 np0005539482 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 00:07:43 np0005539482 systemd[74158]: Activating special unit Exit the Session...
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped target Main User Target.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped target Basic System.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped target Paths.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped target Sockets.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped target Timers.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 00:07:43 np0005539482 systemd[74158]: Closed D-Bus User Message Bus Socket.
Nov 29 00:07:43 np0005539482 systemd[74158]: Stopped Create User's Volatile Files and Directories.
Nov 29 00:07:43 np0005539482 systemd[74158]: Removed slice User Application Slice.
Nov 29 00:07:43 np0005539482 systemd[74158]: Reached target Shutdown.
Nov 29 00:07:43 np0005539482 systemd[74158]: Finished Exit the Session.
Nov 29 00:07:43 np0005539482 systemd[74158]: Reached target Exit the Session.
Nov 29 00:07:43 np0005539482 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 00:07:43 np0005539482 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 00:07:43 np0005539482 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 00:07:43 np0005539482 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 00:07:43 np0005539482 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 00:07:43 np0005539482 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 00:07:43 np0005539482 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 00:07:47 np0005539482 podman[74212]: 2025-11-29 05:07:47.190577486 +0000 UTC m=+13.879009126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.279468559 +0000 UTC m=+0.051918607 container create f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:07:47 np0005539482 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 00:07:47 np0005539482 systemd[1]: Started libpod-conmon-f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7.scope.
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.257194114 +0000 UTC m=+0.029644162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.410645164 +0000 UTC m=+0.183095242 container init f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.420969813 +0000 UTC m=+0.193419861 container start f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.424703302 +0000 UTC m=+0.197153360 container attach f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:47 np0005539482 sweet_dhawan[74316]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 00:07:47 np0005539482 systemd[1]: libpod-f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7.scope: Deactivated successfully.
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.723199802 +0000 UTC m=+0.495649860 container died f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:07:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d542b808386550c61e5bd03f6c29417f304a01ee2309111c51861fbd24a20eb8-merged.mount: Deactivated successfully.
Nov 29 00:07:47 np0005539482 podman[74300]: 2025-11-29 05:07:47.773781546 +0000 UTC m=+0.546231624 container remove f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:07:47 np0005539482 systemd[1]: libpod-conmon-f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7.scope: Deactivated successfully.
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.858306753 +0000 UTC m=+0.057784977 container create 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:47 np0005539482 systemd[1]: Started libpod-conmon-45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0.scope.
Nov 29 00:07:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.830764192 +0000 UTC m=+0.030242526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.928549608 +0000 UTC m=+0.128027832 container init 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.934540162 +0000 UTC m=+0.134018386 container start 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.937607106 +0000 UTC m=+0.137085330 container attach 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:07:47 np0005539482 affectionate_gagarin[74347]: 167 167
Nov 29 00:07:47 np0005539482 systemd[1]: libpod-45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0.scope: Deactivated successfully.
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.939235234 +0000 UTC m=+0.138713458 container died 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:47 np0005539482 podman[74331]: 2025-11-29 05:07:47.976528099 +0000 UTC m=+0.176006323 container remove 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:07:47 np0005539482 systemd[1]: libpod-conmon-45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0.scope: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.040188256 +0000 UTC m=+0.041724962 container create 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:07:48 np0005539482 systemd[1]: Started libpod-conmon-4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac.scope.
Nov 29 00:07:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.022429569 +0000 UTC m=+0.023966285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.122019928 +0000 UTC m=+0.123556654 container init 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.133018422 +0000 UTC m=+0.134555168 container start 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.136880285 +0000 UTC m=+0.138417041 container attach 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:48 np0005539482 youthful_boyd[74381]: AQCkfyppi5ddChAAxPeZ4vZwWoNgstL3bFLnKA==
Nov 29 00:07:48 np0005539482 systemd[1]: libpod-4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac.scope: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.18002309 +0000 UTC m=+0.181559836 container died 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0ec3b6396c7dafa3c78018ae6fcf18b20083134316d0e3c697eb9b0fa2cefaba-merged.mount: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74364]: 2025-11-29 05:07:48.227434818 +0000 UTC m=+0.228971564 container remove 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:48 np0005539482 systemd[1]: libpod-conmon-4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac.scope: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.317360044 +0000 UTC m=+0.059489428 container create 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:48 np0005539482 systemd[1]: Started libpod-conmon-5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5.scope.
Nov 29 00:07:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.388925191 +0000 UTC m=+0.131054645 container init 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.298117123 +0000 UTC m=+0.040246497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.398765727 +0000 UTC m=+0.140895131 container start 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.40261747 +0000 UTC m=+0.144746874 container attach 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:07:48 np0005539482 interesting_ritchie[74416]: AQCkfyppOwUAGhAAGR45Br/xGAd+PzV1CBG2Rw==
Nov 29 00:07:48 np0005539482 systemd[1]: libpod-5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5.scope: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.441475902 +0000 UTC m=+0.183605286 container died 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:07:48 np0005539482 podman[74400]: 2025-11-29 05:07:48.478695264 +0000 UTC m=+0.220824638 container remove 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:48 np0005539482 systemd[1]: libpod-conmon-5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5.scope: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74435]: 2025-11-29 05:07:48.529949874 +0000 UTC m=+0.034983930 container create 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:07:48 np0005539482 systemd[1]: Started libpod-conmon-5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d.scope.
Nov 29 00:07:48 np0005539482 podman[74435]: 2025-11-29 05:07:48.515002745 +0000 UTC m=+0.020036801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:48 np0005539482 podman[74435]: 2025-11-29 05:07:48.836087537 +0000 UTC m=+0.341121623 container init 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:07:48 np0005539482 podman[74435]: 2025-11-29 05:07:48.847499981 +0000 UTC m=+0.352534047 container start 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:48 np0005539482 podman[74435]: 2025-11-29 05:07:48.85162975 +0000 UTC m=+0.356663796 container attach 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:48 np0005539482 intelligent_ishizaka[74453]: AQCkfyppcoWXMxAASIrnLmFhI08U7xTuCVjxYw==
Nov 29 00:07:48 np0005539482 systemd[1]: libpod-5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d.scope: Deactivated successfully.
Nov 29 00:07:48 np0005539482 podman[74460]: 2025-11-29 05:07:48.907853529 +0000 UTC m=+0.027410129 container died 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:07:48 np0005539482 podman[74460]: 2025-11-29 05:07:48.955472841 +0000 UTC m=+0.075029361 container remove 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:07:48 np0005539482 systemd[1]: libpod-conmon-5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d.scope: Deactivated successfully.
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.05301687 +0000 UTC m=+0.055475031 container create 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:07:49 np0005539482 systemd[1]: Started libpod-conmon-5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d.scope.
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.024456826 +0000 UTC m=+0.026915077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:49 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e5c0cc34a574f8972b6c8f663a5b0a4dee18778790bf1392968a89a48e98efa/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.136552674 +0000 UTC m=+0.139010855 container init 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.143609873 +0000 UTC m=+0.146068025 container start 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.146849811 +0000 UTC m=+0.149307962 container attach 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:49 np0005539482 hopeful_montalcini[74490]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 00:07:49 np0005539482 hopeful_montalcini[74490]: setting min_mon_release = pacific
Nov 29 00:07:49 np0005539482 hopeful_montalcini[74490]: /usr/bin/monmaptool: set fsid to 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:49 np0005539482 hopeful_montalcini[74490]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 00:07:49 np0005539482 systemd[1]: libpod-5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d.scope: Deactivated successfully.
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.185362175 +0000 UTC m=+0.187820346 container died 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:07:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5e5c0cc34a574f8972b6c8f663a5b0a4dee18778790bf1392968a89a48e98efa-merged.mount: Deactivated successfully.
Nov 29 00:07:49 np0005539482 podman[74475]: 2025-11-29 05:07:49.226872321 +0000 UTC m=+0.229330482 container remove 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:49 np0005539482 systemd[1]: libpod-conmon-5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d.scope: Deactivated successfully.
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.302159956 +0000 UTC m=+0.044976879 container create d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:49 np0005539482 systemd[1]: Started libpod-conmon-d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464.scope.
Nov 29 00:07:49 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.376186492 +0000 UTC m=+0.119003435 container init d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.281461701 +0000 UTC m=+0.024278634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.384962012 +0000 UTC m=+0.127778925 container start d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.388852966 +0000 UTC m=+0.131669909 container attach d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:07:49 np0005539482 systemd[1]: libpod-d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464.scope: Deactivated successfully.
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.490572056 +0000 UTC m=+0.233389009 container died d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:49 np0005539482 podman[74509]: 2025-11-29 05:07:49.536021006 +0000 UTC m=+0.278837949 container remove d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:07:49 np0005539482 systemd[1]: libpod-conmon-d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464.scope: Deactivated successfully.
Nov 29 00:07:49 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:49 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:49 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:49 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:49 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:49 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:50 np0005539482 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 00:07:50 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:50 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:50 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:50 np0005539482 systemd[1]: Reached target Ceph cluster 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:07:50 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:50 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:50 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:50 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:50 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:50 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:50 np0005539482 systemd[1]: Created slice Slice /system/ceph-93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:07:50 np0005539482 systemd[1]: Reached target System Time Set.
Nov 29 00:07:50 np0005539482 systemd[1]: Reached target System Time Synchronized.
Nov 29 00:07:50 np0005539482 systemd[1]: Starting Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:07:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:51 np0005539482 podman[74804]: 2025-11-29 05:07:51.170846731 +0000 UTC m=+0.040949843 container create 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 podman[74804]: 2025-11-29 05:07:51.233599936 +0000 UTC m=+0.103703068 container init 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:51 np0005539482 podman[74804]: 2025-11-29 05:07:51.246287491 +0000 UTC m=+0.116390603 container start 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:07:51 np0005539482 bash[74804]: 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16
Nov 29 00:07:51 np0005539482 podman[74804]: 2025-11-29 05:07:51.155258127 +0000 UTC m=+0.025361249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:51 np0005539482 systemd[1]: Started Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: pidfile_write: ignore empty --pid-file
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: load: jerasure load: lrc 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Git sha 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: DB SUMMARY
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: DB Session ID:  6W04Q5N79TYXB507NAYJ
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                                     Options.env: 0x56082db8dc40
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                                Options.info_log: 0x56082ff78e80
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                                 Options.wal_dir: 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                    Options.write_buffer_manager: 0x56082ff88b40
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                               Options.row_cache: None
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                              Options.wal_filter: None
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.wal_compression: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.max_background_jobs: 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Compression algorithms supported:
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kZSTD supported: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kXpressCompression supported: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kZlibCompression supported: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:           Options.merge_operator: 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:        Options.compaction_filter: None
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56082ff78a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56082ff711f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.compression: NoCompression
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.num_levels: 7
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7a482e8-4a7b-461a-a1cb-36d637653226
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392871307446, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392871309430, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "6W04Q5N79TYXB507NAYJ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392871309564, "job": 1, "event": "recovery_finished"}
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56082ff9ae00
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: DB pointer 0x560830024000
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56082ff711f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@-1(???) e0 preinit fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.33633784 +0000 UTC m=+0.048860793 container create 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T05:07:49.437560Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).mds e1 new map
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mkfs 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 00:07:51 np0005539482 systemd[1]: Started libpod-conmon-7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef.scope.
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.312504729 +0000 UTC m=+0.025027692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:51 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.458121982 +0000 UTC m=+0.170644935 container init 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.471434521 +0000 UTC m=+0.183957464 container start 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.474914564 +0000 UTC m=+0.187437517 container attach 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 00:07:51 np0005539482 ceph-mon[74823]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900733589' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:  cluster:
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    id:     93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    health: HEALTH_OK
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]: 
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:  services:
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    mon: 1 daemons, quorum compute-0 (age 0.561197s)
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    mgr: no daemons active
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    osd: 0 osds: 0 up, 0 in
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]: 
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:  data:
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    pools:   0 pools, 0 pgs
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    objects: 0 objects, 0 B
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    usage:   0 B used, 0 B / 0 B avail
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]:    pgs:     
Nov 29 00:07:51 np0005539482 goofy_ganguly[74879]: 
Nov 29 00:07:51 np0005539482 systemd[1]: libpod-7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef.scope: Deactivated successfully.
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.920895433 +0000 UTC m=+0.633418416 container died 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay-44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b-merged.mount: Deactivated successfully.
Nov 29 00:07:51 np0005539482 podman[74824]: 2025-11-29 05:07:51.961470316 +0000 UTC m=+0.673993259 container remove 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:07:51 np0005539482 systemd[1]: libpod-conmon-7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef.scope: Deactivated successfully.
Nov 29 00:07:52 np0005539482 podman[74915]: 2025-11-29 05:07:52.021711031 +0000 UTC m=+0.039907229 container create ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:07:52 np0005539482 systemd[1]: Started libpod-conmon-ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e.scope.
Nov 29 00:07:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 podman[74915]: 2025-11-29 05:07:52.004254682 +0000 UTC m=+0.022450860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:52 np0005539482 podman[74915]: 2025-11-29 05:07:52.105694825 +0000 UTC m=+0.123891013 container init ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:07:52 np0005539482 podman[74915]: 2025-11-29 05:07:52.112875927 +0000 UTC m=+0.131072105 container start ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:52 np0005539482 podman[74915]: 2025-11-29 05:07:52.117678342 +0000 UTC m=+0.135874510 container attach ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:52 np0005539482 ceph-mon[74823]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 00:07:52 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 00:07:52 np0005539482 ceph-mon[74823]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672485794' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 00:07:52 np0005539482 ceph-mon[74823]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672485794' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 00:07:52 np0005539482 cool_cannon[74932]: 
Nov 29 00:07:52 np0005539482 cool_cannon[74932]: [global]
Nov 29 00:07:52 np0005539482 cool_cannon[74932]: 	fsid = 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:52 np0005539482 cool_cannon[74932]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 00:07:52 np0005539482 cool_cannon[74932]: 	osd_crush_chooseleaf_type = 0
Nov 29 00:07:52 np0005539482 systemd[1]: libpod-ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e.scope: Deactivated successfully.
Nov 29 00:07:52 np0005539482 podman[74959]: 2025-11-29 05:07:52.564416608 +0000 UTC m=+0.038624907 container died ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 00:07:52 np0005539482 systemd[1]: var-lib-containers-storage-overlay-fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8-merged.mount: Deactivated successfully.
Nov 29 00:07:52 np0005539482 podman[74959]: 2025-11-29 05:07:52.604157492 +0000 UTC m=+0.078365781 container remove ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 00:07:52 np0005539482 systemd[1]: libpod-conmon-ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e.scope: Deactivated successfully.
Nov 29 00:07:52 np0005539482 podman[74974]: 2025-11-29 05:07:52.694110849 +0000 UTC m=+0.056889696 container create 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:52 np0005539482 systemd[1]: Started libpod-conmon-0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b.scope.
Nov 29 00:07:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:52 np0005539482 podman[74974]: 2025-11-29 05:07:52.752785367 +0000 UTC m=+0.115564234 container init 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:07:52 np0005539482 podman[74974]: 2025-11-29 05:07:52.761135967 +0000 UTC m=+0.123914834 container start 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:52 np0005539482 podman[74974]: 2025-11-29 05:07:52.764818975 +0000 UTC m=+0.127597872 container attach 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:07:52 np0005539482 podman[74974]: 2025-11-29 05:07:52.6774777 +0000 UTC m=+0.040256567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495515744' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:07:53 np0005539482 systemd[1]: libpod-0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b.scope: Deactivated successfully.
Nov 29 00:07:53 np0005539482 podman[74974]: 2025-11-29 05:07:53.142072385 +0000 UTC m=+0.504851262 container died 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:07:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317-merged.mount: Deactivated successfully.
Nov 29 00:07:53 np0005539482 podman[74974]: 2025-11-29 05:07:53.175522597 +0000 UTC m=+0.538301444 container remove 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:53 np0005539482 systemd[1]: libpod-conmon-0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b.scope: Deactivated successfully.
Nov 29 00:07:53 np0005539482 systemd[1]: Stopping Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: mon.compute-0@0(leader) e1 shutdown
Nov 29 00:07:53 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0[74819]: 2025-11-29T05:07:53.359+0000 7f1947c8b640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 00:07:53 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0[74819]: 2025-11-29T05:07:53.359+0000 7f1947c8b640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 00:07:53 np0005539482 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 00:07:53 np0005539482 podman[75056]: 2025-11-29 05:07:53.554248041 +0000 UTC m=+0.232094738 container died 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:07:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5-merged.mount: Deactivated successfully.
Nov 29 00:07:53 np0005539482 podman[75056]: 2025-11-29 05:07:53.589331363 +0000 UTC m=+0.267178030 container remove 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:53 np0005539482 bash[75056]: ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0
Nov 29 00:07:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 00:07:53 np0005539482 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0.service: Deactivated successfully.
Nov 29 00:07:53 np0005539482 systemd[1]: Stopped Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:07:53 np0005539482 systemd[1]: Starting Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:07:53 np0005539482 podman[75159]: 2025-11-29 05:07:53.921025899 +0000 UTC m=+0.036398954 container create 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:07:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:53 np0005539482 podman[75159]: 2025-11-29 05:07:53.976993092 +0000 UTC m=+0.092366197 container init 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:07:53 np0005539482 podman[75159]: 2025-11-29 05:07:53.983806625 +0000 UTC m=+0.099179690 container start 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:53 np0005539482 bash[75159]: 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113
Nov 29 00:07:53 np0005539482 podman[75159]: 2025-11-29 05:07:53.904867252 +0000 UTC m=+0.020240327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:53 np0005539482 systemd[1]: Started Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: pidfile_write: ignore empty --pid-file
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: load: jerasure load: lrc 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Git sha 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: DB SUMMARY
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: DB Session ID:  HDG9CTZH3D8UGVBA5ZVT
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52074 ; 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                                     Options.env: 0x556a61cc2c40
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                                Options.info_log: 0x556a62a2f040
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                                 Options.wal_dir: 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                    Options.write_buffer_manager: 0x556a62a3eb40
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                               Options.row_cache: None
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                              Options.wal_filter: None
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.wal_compression: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.max_background_jobs: 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Compression algorithms supported:
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kZSTD supported: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kXpressCompression supported: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kZlibCompression supported: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:           Options.merge_operator: 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:        Options.compaction_filter: None
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556a62a2ec40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556a62a271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.compression: NoCompression
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.num_levels: 7
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7a482e8-4a7b-461a-a1cb-36d637653226
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392874024619, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392874027212, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 129, "table_properties": {"data_size": 50347, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2940, "raw_average_key_size": 30, "raw_value_size": 48026, "raw_average_value_size": 500, "num_data_blocks": 7, "num_entries": 96, "num_filter_entries": 96, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392874027317, "job": 1, "event": "recovery_finished"}
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556a62a50e00
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: DB pointer 0x556a62ada000
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   52.47 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0   52.47 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 4.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 4.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???) e1 preinit fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).mds e1 new map
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.059974322 +0000 UTC m=+0.046795583 container create 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:07:54 np0005539482 systemd[1]: Started libpod-conmon-7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260.scope.
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 00:07:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.034794958 +0000 UTC m=+0.021616239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.143434924 +0000 UTC m=+0.130256215 container init 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.150422642 +0000 UTC m=+0.137243903 container start 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.153333841 +0000 UTC m=+0.140155102 container attach 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 00:07:54 np0005539482 systemd[1]: libpod-7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260.scope: Deactivated successfully.
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.577763952 +0000 UTC m=+0.564585213 container died 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:07:54 np0005539482 systemd[1]: var-lib-containers-storage-overlay-117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108-merged.mount: Deactivated successfully.
Nov 29 00:07:54 np0005539482 podman[75177]: 2025-11-29 05:07:54.630713972 +0000 UTC m=+0.617535233 container remove 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:07:54 np0005539482 systemd[1]: libpod-conmon-7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260.scope: Deactivated successfully.
Nov 29 00:07:54 np0005539482 podman[75272]: 2025-11-29 05:07:54.697530355 +0000 UTC m=+0.045871971 container create ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:07:54 np0005539482 systemd[1]: Started libpod-conmon-ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6.scope.
Nov 29 00:07:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:54 np0005539482 podman[75272]: 2025-11-29 05:07:54.67644092 +0000 UTC m=+0.024782566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:54 np0005539482 podman[75272]: 2025-11-29 05:07:54.782981205 +0000 UTC m=+0.131322871 container init ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:07:54 np0005539482 podman[75272]: 2025-11-29 05:07:54.788770114 +0000 UTC m=+0.137111740 container start ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:54 np0005539482 podman[75272]: 2025-11-29 05:07:54.792734319 +0000 UTC m=+0.141075935 container attach ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:07:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 00:07:55 np0005539482 systemd[1]: libpod-ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6.scope: Deactivated successfully.
Nov 29 00:07:55 np0005539482 podman[75272]: 2025-11-29 05:07:55.218481601 +0000 UTC m=+0.566823217 container died ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703-merged.mount: Deactivated successfully.
Nov 29 00:07:55 np0005539482 podman[75272]: 2025-11-29 05:07:55.262237321 +0000 UTC m=+0.610578927 container remove ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:07:55 np0005539482 systemd[1]: libpod-conmon-ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6.scope: Deactivated successfully.
Nov 29 00:07:55 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:55 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:55 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:55 np0005539482 systemd[1]: Reloading.
Nov 29 00:07:55 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:07:55 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:07:55 np0005539482 systemd[1]: Starting Ceph mgr.compute-0.csskcz for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:07:56 np0005539482 podman[75453]: 2025-11-29 05:07:56.066888742 +0000 UTC m=+0.048900424 container create 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/var/lib/ceph/mgr/ceph-compute-0.csskcz supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 podman[75453]: 2025-11-29 05:07:56.116679846 +0000 UTC m=+0.098691528 container init 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:56 np0005539482 podman[75453]: 2025-11-29 05:07:56.125473857 +0000 UTC m=+0.107485509 container start 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:07:56 np0005539482 bash[75453]: 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2
Nov 29 00:07:56 np0005539482 podman[75453]: 2025-11-29 05:07:56.042426855 +0000 UTC m=+0.024438607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:56 np0005539482 systemd[1]: Started Ceph mgr.compute-0.csskcz for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: pidfile_write: ignore empty --pid-file
Nov 29 00:07:56 np0005539482 podman[75474]: 2025-11-29 05:07:56.228433777 +0000 UTC m=+0.058334301 container create dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:07:56 np0005539482 systemd[1]: Started libpod-conmon-dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde.scope.
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'alerts'
Nov 29 00:07:56 np0005539482 podman[75474]: 2025-11-29 05:07:56.208337175 +0000 UTC m=+0.038237709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:56 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:56 np0005539482 podman[75474]: 2025-11-29 05:07:56.336182521 +0000 UTC m=+0.166083055 container init dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:07:56 np0005539482 podman[75474]: 2025-11-29 05:07:56.344657194 +0000 UTC m=+0.174557718 container start dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:07:56 np0005539482 podman[75474]: 2025-11-29 05:07:56.350395592 +0000 UTC m=+0.180296096 container attach dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'balancer'
Nov 29 00:07:56 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:07:56.589+0000 7f55e947f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 00:07:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:07:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066047919' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]: 
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]: {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "health": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "status": "HEALTH_OK",
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "checks": {},
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "mutes": []
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "election_epoch": 5,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "quorum": [
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        0
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    ],
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "quorum_names": [
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "compute-0"
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    ],
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "quorum_age": 2,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "monmap": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "epoch": 1,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "min_mon_release_name": "reef",
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_mons": 1
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "osdmap": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "epoch": 1,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_osds": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_up_osds": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "osd_up_since": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_in_osds": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "osd_in_since": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_remapped_pgs": 0
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "pgmap": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "pgs_by_state": [],
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_pgs": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_pools": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_objects": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "data_bytes": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "bytes_used": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "bytes_avail": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "bytes_total": 0
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "fsmap": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "epoch": 1,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "by_rank": [],
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "up:standby": 0
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "mgrmap": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "available": false,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "num_standbys": 0,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "modules": [
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:            "iostat",
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:            "nfs",
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:            "restful"
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        ],
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "services": {}
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "servicemap": {
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "epoch": 1,
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:        "services": {}
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    },
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]:    "progress_events": {}
Nov 29 00:07:56 np0005539482 stoic_hugle[75514]: }
Nov 29 00:07:56 np0005539482 systemd[1]: libpod-dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde.scope: Deactivated successfully.
Nov 29 00:07:56 np0005539482 podman[75540]: 2025-11-29 05:07:56.793777668 +0000 UTC m=+0.022315177 container died dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:07:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25-merged.mount: Deactivated successfully.
Nov 29 00:07:56 np0005539482 podman[75540]: 2025-11-29 05:07:56.838990812 +0000 UTC m=+0.067528301 container remove dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:07:56 np0005539482 systemd[1]: libpod-conmon-dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde.scope: Deactivated successfully.
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 00:07:56 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'cephadm'
Nov 29 00:07:56 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:07:56.875+0000 7f55e947f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 00:07:58 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'crash'
Nov 29 00:07:58 np0005539482 podman[75565]: 2025-11-29 05:07:58.909441166 +0000 UTC m=+0.037027269 container create a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:07:58 np0005539482 systemd[1]: Started libpod-conmon-a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c.scope.
Nov 29 00:07:58 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:07:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:07:58 np0005539482 podman[75565]: 2025-11-29 05:07:58.978613065 +0000 UTC m=+0.106199188 container init a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:07:58 np0005539482 podman[75565]: 2025-11-29 05:07:58.983899262 +0000 UTC m=+0.111485365 container start a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:07:58 np0005539482 podman[75565]: 2025-11-29 05:07:58.98673373 +0000 UTC m=+0.114319853 container attach a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:07:58 np0005539482 podman[75565]: 2025-11-29 05:07:58.893343089 +0000 UTC m=+0.020929212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:07:59 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:07:59.066+0000 7f55e947f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 00:07:59 np0005539482 ceph-mgr[75473]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 00:07:59 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'dashboard'
Nov 29 00:07:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:07:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812151667' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:07:59 np0005539482 competent_mendel[75581]: 
Nov 29 00:07:59 np0005539482 competent_mendel[75581]: {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "health": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "status": "HEALTH_OK",
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "checks": {},
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "mutes": []
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "election_epoch": 5,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "quorum": [
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        0
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    ],
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "quorum_names": [
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "compute-0"
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    ],
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "quorum_age": 5,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "monmap": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "epoch": 1,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "min_mon_release_name": "reef",
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_mons": 1
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "osdmap": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "epoch": 1,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_osds": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_up_osds": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "osd_up_since": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_in_osds": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "osd_in_since": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_remapped_pgs": 0
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "pgmap": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "pgs_by_state": [],
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_pgs": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_pools": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_objects": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "data_bytes": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "bytes_used": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "bytes_avail": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "bytes_total": 0
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "fsmap": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "epoch": 1,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "by_rank": [],
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "up:standby": 0
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "mgrmap": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "available": false,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "num_standbys": 0,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "modules": [
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:            "iostat",
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:            "nfs",
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:            "restful"
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        ],
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "services": {}
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "servicemap": {
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "epoch": 1,
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:        "services": {}
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    },
Nov 29 00:07:59 np0005539482 competent_mendel[75581]:    "progress_events": {}
Nov 29 00:07:59 np0005539482 competent_mendel[75581]: }
Nov 29 00:07:59 np0005539482 systemd[1]: libpod-a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c.scope: Deactivated successfully.
Nov 29 00:07:59 np0005539482 podman[75565]: 2025-11-29 05:07:59.366498229 +0000 UTC m=+0.494084342 container died a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:07:59 np0005539482 systemd[1]: var-lib-containers-storage-overlay-187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c-merged.mount: Deactivated successfully.
Nov 29 00:07:59 np0005539482 podman[75565]: 2025-11-29 05:07:59.410889104 +0000 UTC m=+0.538475207 container remove a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:07:59 np0005539482 systemd[1]: libpod-conmon-a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c.scope: Deactivated successfully.
Nov 29 00:08:00 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'devicehealth'
Nov 29 00:08:00 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:00.682+0000 7f55e947f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 00:08:00 np0005539482 ceph-mgr[75473]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 00:08:00 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 00:08:01 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 00:08:01 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 00:08:01 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]:  from numpy import show_config as show_numpy_config
Nov 29 00:08:01 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:01.191+0000 7f55e947f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'influx'
Nov 29 00:08:01 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:01.439+0000 7f55e947f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'insights'
Nov 29 00:08:01 np0005539482 podman[75621]: 2025-11-29 05:08:01.488015058 +0000 UTC m=+0.053434963 container create abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:01 np0005539482 systemd[1]: Started libpod-conmon-abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a.scope.
Nov 29 00:08:01 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:01 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:01 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:01 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:01 np0005539482 podman[75621]: 2025-11-29 05:08:01.562757481 +0000 UTC m=+0.128177386 container init abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:08:01 np0005539482 podman[75621]: 2025-11-29 05:08:01.466876391 +0000 UTC m=+0.032296316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:01 np0005539482 podman[75621]: 2025-11-29 05:08:01.571616003 +0000 UTC m=+0.137035918 container start abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:08:01 np0005539482 podman[75621]: 2025-11-29 05:08:01.575434335 +0000 UTC m=+0.140854260 container attach abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'iostat'
Nov 29 00:08:01 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:01.917+0000 7f55e947f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 00:08:01 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'k8sevents'
Nov 29 00:08:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785862208' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]: 
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]: {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "health": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "status": "HEALTH_OK",
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "checks": {},
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "mutes": []
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "election_epoch": 5,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "quorum": [
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        0
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    ],
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "quorum_names": [
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "compute-0"
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    ],
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "quorum_age": 7,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "monmap": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "epoch": 1,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "min_mon_release_name": "reef",
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_mons": 1
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "osdmap": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "epoch": 1,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_osds": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_up_osds": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "osd_up_since": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_in_osds": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "osd_in_since": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_remapped_pgs": 0
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "pgmap": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "pgs_by_state": [],
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_pgs": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_pools": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_objects": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "data_bytes": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "bytes_used": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "bytes_avail": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "bytes_total": 0
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "fsmap": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "epoch": 1,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "by_rank": [],
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "up:standby": 0
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "mgrmap": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "available": false,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "num_standbys": 0,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "modules": [
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:            "iostat",
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:            "nfs",
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:            "restful"
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        ],
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "services": {}
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "servicemap": {
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "epoch": 1,
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:        "services": {}
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    },
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]:    "progress_events": {}
Nov 29 00:08:01 np0005539482 goofy_hamilton[75638]: }
Nov 29 00:08:01 np0005539482 systemd[1]: libpod-abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a.scope: Deactivated successfully.
Nov 29 00:08:01 np0005539482 podman[75621]: 2025-11-29 05:08:01.987098389 +0000 UTC m=+0.552518334 container died abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c-merged.mount: Deactivated successfully.
Nov 29 00:08:02 np0005539482 podman[75621]: 2025-11-29 05:08:02.031656488 +0000 UTC m=+0.597076403 container remove abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:08:02 np0005539482 systemd[1]: libpod-conmon-abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a.scope: Deactivated successfully.
Nov 29 00:08:03 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'localpool'
Nov 29 00:08:03 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.099300555 +0000 UTC m=+0.043427083 container create dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:04 np0005539482 systemd[1]: Started libpod-conmon-dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619.scope.
Nov 29 00:08:04 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.076313173 +0000 UTC m=+0.020439731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.176237531 +0000 UTC m=+0.120364129 container init dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.183123576 +0000 UTC m=+0.127250104 container start dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.186757862 +0000 UTC m=+0.130884480 container attach dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:08:04 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'mirroring'
Nov 29 00:08:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502031567' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]: 
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]: {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "health": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "status": "HEALTH_OK",
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "checks": {},
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "mutes": []
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "election_epoch": 5,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "quorum": [
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        0
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    ],
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "quorum_names": [
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "compute-0"
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    ],
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "quorum_age": 10,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "monmap": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "epoch": 1,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "min_mon_release_name": "reef",
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_mons": 1
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "osdmap": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "epoch": 1,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_osds": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_up_osds": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "osd_up_since": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_in_osds": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "osd_in_since": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_remapped_pgs": 0
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "pgmap": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "pgs_by_state": [],
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_pgs": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_pools": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_objects": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "data_bytes": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "bytes_used": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "bytes_avail": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "bytes_total": 0
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "fsmap": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "epoch": 1,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "by_rank": [],
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "up:standby": 0
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "mgrmap": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "available": false,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "num_standbys": 0,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "modules": [
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:            "iostat",
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:            "nfs",
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:            "restful"
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        ],
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "services": {}
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "servicemap": {
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "epoch": 1,
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:        "services": {}
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    },
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]:    "progress_events": {}
Nov 29 00:08:04 np0005539482 crazy_lovelace[75695]: }
Nov 29 00:08:04 np0005539482 systemd[1]: libpod-dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619.scope: Deactivated successfully.
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.579047382 +0000 UTC m=+0.523173910 container died dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:08:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23-merged.mount: Deactivated successfully.
Nov 29 00:08:04 np0005539482 podman[75678]: 2025-11-29 05:08:04.618072718 +0000 UTC m=+0.562199246 container remove dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:08:04 np0005539482 systemd[1]: libpod-conmon-dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619.scope: Deactivated successfully.
Nov 29 00:08:04 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'nfs'
Nov 29 00:08:05 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:05.519+0000 7f55e947f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 00:08:05 np0005539482 ceph-mgr[75473]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 00:08:05 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'orchestrator'
Nov 29 00:08:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:06.226+0000 7f55e947f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:06 np0005539482 ceph-mgr[75473]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:06 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 00:08:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:06.493+0000 7f55e947f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 00:08:06 np0005539482 ceph-mgr[75473]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 00:08:06 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'osd_support'
Nov 29 00:08:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:06.719+0000 7f55e947f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 00:08:06 np0005539482 ceph-mgr[75473]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 00:08:06 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 00:08:06 np0005539482 podman[75735]: 2025-11-29 05:08:06.665803758 +0000 UTC m=+0.022225645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:06 np0005539482 podman[75735]: 2025-11-29 05:08:06.769606117 +0000 UTC m=+0.126027994 container create 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:08:06 np0005539482 systemd[1]: Started libpod-conmon-27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7.scope.
Nov 29 00:08:06 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:06 np0005539482 podman[75735]: 2025-11-29 05:08:06.831467951 +0000 UTC m=+0.187889868 container init 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:06 np0005539482 podman[75735]: 2025-11-29 05:08:06.838191653 +0000 UTC m=+0.194613530 container start 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:08:06 np0005539482 podman[75735]: 2025-11-29 05:08:06.84224914 +0000 UTC m=+0.198671017 container attach 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:07.014+0000 7f55e947f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 00:08:07 np0005539482 ceph-mgr[75473]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 00:08:07 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'progress'
Nov 29 00:08:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971488717' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]: 
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]: {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "health": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "status": "HEALTH_OK",
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "checks": {},
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "mutes": []
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "election_epoch": 5,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "quorum": [
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        0
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    ],
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "quorum_names": [
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "compute-0"
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    ],
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "quorum_age": 13,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "monmap": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "epoch": 1,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "min_mon_release_name": "reef",
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_mons": 1
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "osdmap": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "epoch": 1,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_osds": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_up_osds": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "osd_up_since": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_in_osds": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "osd_in_since": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_remapped_pgs": 0
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "pgmap": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "pgs_by_state": [],
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_pgs": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_pools": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_objects": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "data_bytes": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "bytes_used": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "bytes_avail": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "bytes_total": 0
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "fsmap": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "epoch": 1,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "by_rank": [],
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "up:standby": 0
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "mgrmap": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "available": false,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "num_standbys": 0,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "modules": [
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:            "iostat",
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:            "nfs",
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:            "restful"
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        ],
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "services": {}
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "servicemap": {
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "epoch": 1,
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:        "services": {}
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    },
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]:    "progress_events": {}
Nov 29 00:08:07 np0005539482 optimistic_hypatia[75751]: }
Nov 29 00:08:07 np0005539482 systemd[1]: libpod-27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7.scope: Deactivated successfully.
Nov 29 00:08:07 np0005539482 podman[75735]: 2025-11-29 05:08:07.242597763 +0000 UTC m=+0.599019630 container died 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:08:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:07.262+0000 7f55e947f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 00:08:07 np0005539482 ceph-mgr[75473]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 00:08:07 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'prometheus'
Nov 29 00:08:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600-merged.mount: Deactivated successfully.
Nov 29 00:08:07 np0005539482 podman[75735]: 2025-11-29 05:08:07.289493798 +0000 UTC m=+0.645915665 container remove 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:07 np0005539482 systemd[1]: libpod-conmon-27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7.scope: Deactivated successfully.
Nov 29 00:08:08 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:08.236+0000 7f55e947f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 00:08:08 np0005539482 ceph-mgr[75473]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 00:08:08 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'rbd_support'
Nov 29 00:08:08 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:08.516+0000 7f55e947f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 00:08:08 np0005539482 ceph-mgr[75473]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 00:08:08 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'restful'
Nov 29 00:08:09 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'rgw'
Nov 29 00:08:09 np0005539482 podman[75789]: 2025-11-29 05:08:09.332591755 +0000 UTC m=+0.019975980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:09 np0005539482 podman[75789]: 2025-11-29 05:08:09.531434355 +0000 UTC m=+0.218818540 container create fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:08:09 np0005539482 systemd[1]: Started libpod-conmon-fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b.scope.
Nov 29 00:08:09 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:09 np0005539482 podman[75789]: 2025-11-29 05:08:09.892573887 +0000 UTC m=+0.579958112 container init fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:08:09 np0005539482 podman[75789]: 2025-11-29 05:08:09.900434966 +0000 UTC m=+0.587819171 container start fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:08:09 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:09.901+0000 7f55e947f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 00:08:09 np0005539482 ceph-mgr[75473]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 00:08:09 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'rook'
Nov 29 00:08:09 np0005539482 podman[75789]: 2025-11-29 05:08:09.904513584 +0000 UTC m=+0.591897799 container attach fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400864301' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]: 
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]: {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "health": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "status": "HEALTH_OK",
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "checks": {},
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "mutes": []
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "election_epoch": 5,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "quorum": [
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        0
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    ],
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "quorum_names": [
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "compute-0"
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    ],
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "quorum_age": 16,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "monmap": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "epoch": 1,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "min_mon_release_name": "reef",
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_mons": 1
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "osdmap": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "epoch": 1,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_osds": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_up_osds": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "osd_up_since": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_in_osds": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "osd_in_since": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_remapped_pgs": 0
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "pgmap": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "pgs_by_state": [],
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_pgs": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_pools": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_objects": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "data_bytes": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "bytes_used": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "bytes_avail": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "bytes_total": 0
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "fsmap": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "epoch": 1,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "by_rank": [],
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "up:standby": 0
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "mgrmap": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "available": false,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "num_standbys": 0,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "modules": [
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:            "iostat",
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:            "nfs",
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:            "restful"
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        ],
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "services": {}
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "servicemap": {
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "epoch": 1,
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:        "services": {}
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    },
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]:    "progress_events": {}
Nov 29 00:08:10 np0005539482 wizardly_germain[75805]: }
Nov 29 00:08:10 np0005539482 systemd[1]: libpod-fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b.scope: Deactivated successfully.
Nov 29 00:08:10 np0005539482 podman[75789]: 2025-11-29 05:08:10.295573834 +0000 UTC m=+0.982958069 container died fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:10 np0005539482 systemd[1]: var-lib-containers-storage-overlay-32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2-merged.mount: Deactivated successfully.
Nov 29 00:08:10 np0005539482 podman[75789]: 2025-11-29 05:08:10.365588344 +0000 UTC m=+1.052972579 container remove fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:10 np0005539482 systemd[1]: libpod-conmon-fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b.scope: Deactivated successfully.
Nov 29 00:08:11 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:11.956+0000 7f55e947f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 00:08:11 np0005539482 ceph-mgr[75473]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 00:08:11 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'selftest'
Nov 29 00:08:12 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:12.206+0000 7f55e947f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'snap_schedule'
Nov 29 00:08:12 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:12.453+0000 7f55e947f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'stats'
Nov 29 00:08:12 np0005539482 podman[75844]: 2025-11-29 05:08:12.435216958 +0000 UTC m=+0.034831957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:12 np0005539482 podman[75844]: 2025-11-29 05:08:12.650483212 +0000 UTC m=+0.250098131 container create a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'status'
Nov 29 00:08:12 np0005539482 systemd[1]: Started libpod-conmon-a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554.scope.
Nov 29 00:08:12 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:12 np0005539482 podman[75844]: 2025-11-29 05:08:12.74005226 +0000 UTC m=+0.339667189 container init a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:12 np0005539482 podman[75844]: 2025-11-29 05:08:12.745666605 +0000 UTC m=+0.345281534 container start a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:12 np0005539482 podman[75844]: 2025-11-29 05:08:12.750013189 +0000 UTC m=+0.349628138 container attach a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:08:12 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:12.956+0000 7f55e947f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 00:08:12 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'telegraf'
Nov 29 00:08:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929787841' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]: 
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]: {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "health": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "status": "HEALTH_OK",
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "checks": {},
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "mutes": []
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "election_epoch": 5,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "quorum": [
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        0
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    ],
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "quorum_names": [
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "compute-0"
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    ],
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "quorum_age": 19,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "monmap": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "epoch": 1,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "min_mon_release_name": "reef",
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_mons": 1
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "osdmap": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "epoch": 1,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_osds": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_up_osds": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "osd_up_since": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_in_osds": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "osd_in_since": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_remapped_pgs": 0
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "pgmap": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "pgs_by_state": [],
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_pgs": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_pools": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_objects": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "data_bytes": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "bytes_used": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "bytes_avail": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "bytes_total": 0
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "fsmap": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "epoch": 1,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "by_rank": [],
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "up:standby": 0
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "mgrmap": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "available": false,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "num_standbys": 0,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "modules": [
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:            "iostat",
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:            "nfs",
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:            "restful"
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        ],
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "services": {}
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "servicemap": {
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "epoch": 1,
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:        "services": {}
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    },
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]:    "progress_events": {}
Nov 29 00:08:13 np0005539482 vigorous_almeida[75861]: }
Nov 29 00:08:13 np0005539482 systemd[1]: libpod-a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554.scope: Deactivated successfully.
Nov 29 00:08:13 np0005539482 podman[75844]: 2025-11-29 05:08:13.156082249 +0000 UTC m=+0.755697248 container died a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:08:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:13.193+0000 7f55e947f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 00:08:13 np0005539482 ceph-mgr[75473]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 00:08:13 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'telemetry'
Nov 29 00:08:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c-merged.mount: Deactivated successfully.
Nov 29 00:08:13 np0005539482 podman[75844]: 2025-11-29 05:08:13.228510917 +0000 UTC m=+0.828125876 container remove a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:08:13 np0005539482 systemd[1]: libpod-conmon-a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554.scope: Deactivated successfully.
Nov 29 00:08:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:13.780+0000 7f55e947f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 00:08:13 np0005539482 ceph-mgr[75473]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 00:08:13 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 00:08:14 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:14.434+0000 7f55e947f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:14 np0005539482 ceph-mgr[75473]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:14 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'volumes'
Nov 29 00:08:15 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:15.143+0000 7f55e947f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'zabbix'
Nov 29 00:08:15 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:15.378+0000 7f55e947f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: ms_deliver_dispatch: unhandled message 0x562d8b79f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.csskcz
Nov 29 00:08:15 np0005539482 podman[75900]: 2025-11-29 05:08:15.303040099 +0000 UTC m=+0.038412083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:15 np0005539482 podman[75900]: 2025-11-29 05:08:15.948388088 +0000 UTC m=+0.683760062 container create 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr handle_mgr_map Activating!
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr handle_mgr_map I am now activating
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.csskcz(active, starting, since 0.573948s)
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: balancer
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Manager daemon compute-0.csskcz is now available
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: crash
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [balancer INFO root] Starting
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: devicehealth
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Starting
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: iostat
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:08:15
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [balancer INFO root] No pools available
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: nfs
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: orchestrator
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: pg_autoscaler
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: progress
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 systemd[1]: Started libpod-conmon-9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06.scope.
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [progress INFO root] Loading...
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [progress INFO root] No stored events to load
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [progress INFO root] Loaded [] historic events
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] recovery thread starting
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] starting setup
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: rbd_support
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: restful
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: status
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: telemetry
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [restful WARNING root] server not running: no certificate configured
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] PerfHandler: starting
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TaskHandler: starting
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"} v 0) v1
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 00:08:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:08:15 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 00:08:15 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] setup complete
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 00:08:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:16 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: volumes
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: Activating manager daemon compute-0.csskcz
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: Manager daemon compute-0.csskcz is now available
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:16 np0005539482 podman[75900]: 2025-11-29 05:08:16.015346304 +0000 UTC m=+0.750718308 container init 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 00:08:16 np0005539482 podman[75900]: 2025-11-29 05:08:16.022106896 +0000 UTC m=+0.757478860 container start 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:16 np0005539482 podman[75900]: 2025-11-29 05:08:16.025384075 +0000 UTC m=+0.760756039 container attach 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357198134' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]: 
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]: {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "health": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "status": "HEALTH_OK",
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "checks": {},
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "mutes": []
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "election_epoch": 5,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "quorum": [
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        0
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    ],
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "quorum_names": [
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "compute-0"
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    ],
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "quorum_age": 22,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "monmap": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "epoch": 1,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "min_mon_release_name": "reef",
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_mons": 1
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "osdmap": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "epoch": 1,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_osds": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_up_osds": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "osd_up_since": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_in_osds": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "osd_in_since": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_remapped_pgs": 0
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "pgmap": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "pgs_by_state": [],
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_pgs": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_pools": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_objects": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "data_bytes": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "bytes_used": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "bytes_avail": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "bytes_total": 0
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "fsmap": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "epoch": 1,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "by_rank": [],
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "up:standby": 0
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "mgrmap": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "available": false,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "num_standbys": 0,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "modules": [
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:            "iostat",
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:            "nfs",
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:            "restful"
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        ],
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "services": {}
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "servicemap": {
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "epoch": 1,
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:        "services": {}
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    },
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]:    "progress_events": {}
Nov 29 00:08:16 np0005539482 blissful_archimedes[75947]: }
Nov 29 00:08:16 np0005539482 systemd[1]: libpod-9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06.scope: Deactivated successfully.
Nov 29 00:08:16 np0005539482 podman[75900]: 2025-11-29 05:08:16.423920005 +0000 UTC m=+1.159292049 container died 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 00:08:16 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b-merged.mount: Deactivated successfully.
Nov 29 00:08:16 np0005539482 podman[75900]: 2025-11-29 05:08:16.470922842 +0000 UTC m=+1.206294816 container remove 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 00:08:16 np0005539482 systemd[1]: libpod-conmon-9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06.scope: Deactivated successfully.
Nov 29 00:08:16 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.csskcz(active, since 1.60234s)
Nov 29 00:08:17 np0005539482 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 00:08:17 np0005539482 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:17 np0005539482 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:17 np0005539482 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:17 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:18 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.csskcz(active, since 2s)
Nov 29 00:08:18 np0005539482 podman[76035]: 2025-11-29 05:08:18.533424033 +0000 UTC m=+0.039260914 container create 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:08:18 np0005539482 systemd[1]: Started libpod-conmon-8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197.scope.
Nov 29 00:08:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:18 np0005539482 podman[76035]: 2025-11-29 05:08:18.599735772 +0000 UTC m=+0.105572653 container init 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:18 np0005539482 podman[76035]: 2025-11-29 05:08:18.604530547 +0000 UTC m=+0.110367418 container start 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:08:18 np0005539482 podman[76035]: 2025-11-29 05:08:18.607608215 +0000 UTC m=+0.113445096 container attach 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:08:18 np0005539482 podman[76035]: 2025-11-29 05:08:18.518066666 +0000 UTC m=+0.023903577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:08:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118182639' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:08:19 np0005539482 zealous_elion[76052]: 
Nov 29 00:08:19 np0005539482 zealous_elion[76052]: {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "health": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "status": "HEALTH_OK",
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "checks": {},
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "mutes": []
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "election_epoch": 5,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "quorum": [
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        0
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    ],
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "quorum_names": [
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "compute-0"
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    ],
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "quorum_age": 25,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "monmap": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "epoch": 1,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "min_mon_release_name": "reef",
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_mons": 1
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "osdmap": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "epoch": 1,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_osds": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_up_osds": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "osd_up_since": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_in_osds": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "osd_in_since": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_remapped_pgs": 0
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "pgmap": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "pgs_by_state": [],
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_pgs": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_pools": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_objects": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "data_bytes": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "bytes_used": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "bytes_avail": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "bytes_total": 0
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "fsmap": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "epoch": 1,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "by_rank": [],
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "up:standby": 0
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "mgrmap": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "available": true,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "num_standbys": 0,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "modules": [
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:            "iostat",
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:            "nfs",
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:            "restful"
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        ],
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "services": {}
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "servicemap": {
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "epoch": 1,
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:        "services": {}
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    },
Nov 29 00:08:19 np0005539482 zealous_elion[76052]:    "progress_events": {}
Nov 29 00:08:19 np0005539482 zealous_elion[76052]: }
Nov 29 00:08:19 np0005539482 systemd[1]: libpod-8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197.scope: Deactivated successfully.
Nov 29 00:08:19 np0005539482 podman[76035]: 2025-11-29 05:08:19.211566898 +0000 UTC m=+0.717403779 container died 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:08:19 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406-merged.mount: Deactivated successfully.
Nov 29 00:08:19 np0005539482 podman[76035]: 2025-11-29 05:08:19.257157051 +0000 UTC m=+0.762993932 container remove 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:08:19 np0005539482 systemd[1]: libpod-conmon-8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197.scope: Deactivated successfully.
Nov 29 00:08:19 np0005539482 podman[76089]: 2025-11-29 05:08:19.322076869 +0000 UTC m=+0.047921035 container create 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:08:19 np0005539482 systemd[1]: Started libpod-conmon-7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478.scope.
Nov 29 00:08:19 np0005539482 podman[76089]: 2025-11-29 05:08:19.295510265 +0000 UTC m=+0.021354491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:19 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:19 np0005539482 podman[76089]: 2025-11-29 05:08:19.423309505 +0000 UTC m=+0.149153731 container init 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:19 np0005539482 podman[76089]: 2025-11-29 05:08:19.434127853 +0000 UTC m=+0.159971989 container start 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:08:19 np0005539482 podman[76089]: 2025-11-29 05:08:19.437528288 +0000 UTC m=+0.163372414 container attach 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:19 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 00:08:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2974597965' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 00:08:19 np0005539482 systemd[1]: libpod-7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478.scope: Deactivated successfully.
Nov 29 00:08:19 np0005539482 podman[76089]: 2025-11-29 05:08:19.991048303 +0000 UTC m=+0.716892489 container died 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:08:20 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2974597965' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 00:08:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670-merged.mount: Deactivated successfully.
Nov 29 00:08:20 np0005539482 podman[76089]: 2025-11-29 05:08:20.908199954 +0000 UTC m=+1.634044110 container remove 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:20 np0005539482 systemd[1]: libpod-conmon-7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478.scope: Deactivated successfully.
Nov 29 00:08:20 np0005539482 podman[76148]: 2025-11-29 05:08:20.968640273 +0000 UTC m=+0.041400392 container create c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:08:20 np0005539482 systemd[1]: Started libpod-conmon-c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff.scope.
Nov 29 00:08:21 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:21 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:21 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:21 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:21 np0005539482 podman[76148]: 2025-11-29 05:08:21.039462462 +0000 UTC m=+0.112222611 container init c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:08:21 np0005539482 podman[76148]: 2025-11-29 05:08:20.951303003 +0000 UTC m=+0.024063122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:21 np0005539482 podman[76148]: 2025-11-29 05:08:21.048464069 +0000 UTC m=+0.121224188 container start c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:08:21 np0005539482 podman[76148]: 2025-11-29 05:08:21.05213028 +0000 UTC m=+0.124890419 container attach c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:08:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 00:08:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 00:08:21 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 00:08:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  1: '-n'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  2: 'mgr.compute-0.csskcz'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  3: '-f'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  4: '--setuser'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  5: 'ceph'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  6: '--setgroup'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  7: 'ceph'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  8: '--default-log-to-file=false'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  9: '--default-log-to-journald=true'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 29 00:08:21 np0005539482 ceph-mgr[75473]: mgr respawn  exe_path /proc/self/exe
Nov 29 00:08:21 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.csskcz(active, since 6s)
Nov 29 00:08:21 np0005539482 systemd[1]: libpod-c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff.scope: Deactivated successfully.
Nov 29 00:08:21 np0005539482 podman[76148]: 2025-11-29 05:08:21.90283931 +0000 UTC m=+0.975599439 container died c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:21 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade-merged.mount: Deactivated successfully.
Nov 29 00:08:21 np0005539482 podman[76148]: 2025-11-29 05:08:21.953940055 +0000 UTC m=+1.026700144 container remove c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:21 np0005539482 systemd[1]: libpod-conmon-c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff.scope: Deactivated successfully.
Nov 29 00:08:22 np0005539482 podman[76202]: 2025-11-29 05:08:22.021871628 +0000 UTC m=+0.048227151 container create 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:08:22 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: ignoring --setuser ceph since I am not root
Nov 29 00:08:22 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: ignoring --setgroup ceph since I am not root
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: pidfile_write: ignore empty --pid-file
Nov 29 00:08:22 np0005539482 systemd[1]: Started libpod-conmon-68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56.scope.
Nov 29 00:08:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:22 np0005539482 podman[76202]: 2025-11-29 05:08:22.085505598 +0000 UTC m=+0.111861191 container init 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:08:22 np0005539482 podman[76202]: 2025-11-29 05:08:22.091852467 +0000 UTC m=+0.118207980 container start 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:08:22 np0005539482 podman[76202]: 2025-11-29 05:08:21.996586362 +0000 UTC m=+0.022941895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:22 np0005539482 podman[76202]: 2025-11-29 05:08:22.095837415 +0000 UTC m=+0.122192968 container attach 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'alerts'
Nov 29 00:08:22 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:22.458+0000 7fa55b499140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'balancer'
Nov 29 00:08:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 00:08:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544642393' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 00:08:22 np0005539482 trusting_lamport[76242]: {
Nov 29 00:08:22 np0005539482 trusting_lamport[76242]:    "epoch": 5,
Nov 29 00:08:22 np0005539482 trusting_lamport[76242]:    "available": true,
Nov 29 00:08:22 np0005539482 trusting_lamport[76242]:    "active_name": "compute-0.csskcz",
Nov 29 00:08:22 np0005539482 trusting_lamport[76242]:    "num_standby": 0
Nov 29 00:08:22 np0005539482 trusting_lamport[76242]: }
Nov 29 00:08:22 np0005539482 systemd[1]: libpod-68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56.scope: Deactivated successfully.
Nov 29 00:08:22 np0005539482 podman[76268]: 2025-11-29 05:08:22.688334267 +0000 UTC m=+0.023358514 container died 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:08:22 np0005539482 systemd[1]: var-lib-containers-storage-overlay-42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194-merged.mount: Deactivated successfully.
Nov 29 00:08:22 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:22.707+0000 7fa55b499140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 00:08:22 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'cephadm'
Nov 29 00:08:22 np0005539482 podman[76268]: 2025-11-29 05:08:22.726522997 +0000 UTC m=+0.061547214 container remove 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:08:22 np0005539482 systemd[1]: libpod-conmon-68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56.scope: Deactivated successfully.
Nov 29 00:08:22 np0005539482 podman[76283]: 2025-11-29 05:08:22.791896775 +0000 UTC m=+0.040056692 container create 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:22 np0005539482 systemd[1]: Started libpod-conmon-4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0.scope.
Nov 29 00:08:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:22 np0005539482 podman[76283]: 2025-11-29 05:08:22.845621987 +0000 UTC m=+0.093781944 container init 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:08:22 np0005539482 podman[76283]: 2025-11-29 05:08:22.850605126 +0000 UTC m=+0.098765043 container start 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:08:22 np0005539482 podman[76283]: 2025-11-29 05:08:22.85350635 +0000 UTC m=+0.101666267 container attach 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:22 np0005539482 podman[76283]: 2025-11-29 05:08:22.776816473 +0000 UTC m=+0.024976410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:22 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 00:08:24 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'crash'
Nov 29 00:08:24 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:24.876+0000 7fa55b499140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 00:08:24 np0005539482 ceph-mgr[75473]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 00:08:24 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'dashboard'
Nov 29 00:08:26 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'devicehealth'
Nov 29 00:08:26 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:26.490+0000 7fa55b499140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 00:08:26 np0005539482 ceph-mgr[75473]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 00:08:26 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 00:08:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 00:08:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 00:08:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]:  from numpy import show_config as show_numpy_config
Nov 29 00:08:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:27.020+0000 7fa55b499140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'influx'
Nov 29 00:08:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:27.250+0000 7fa55b499140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'insights'
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'iostat'
Nov 29 00:08:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:27.718+0000 7fa55b499140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 00:08:27 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'k8sevents'
Nov 29 00:08:29 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'localpool'
Nov 29 00:08:29 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 00:08:30 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'mirroring'
Nov 29 00:08:30 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'nfs'
Nov 29 00:08:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:31.241+0000 7fa55b499140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 00:08:31 np0005539482 ceph-mgr[75473]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 00:08:31 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'orchestrator'
Nov 29 00:08:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:31.896+0000 7fa55b499140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:31 np0005539482 ceph-mgr[75473]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:31 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 00:08:32 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.167+0000 7fa55b499140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'osd_support'
Nov 29 00:08:32 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.401+0000 7fa55b499140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 00:08:32 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.681+0000 7fa55b499140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'progress'
Nov 29 00:08:32 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.922+0000 7fa55b499140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 00:08:32 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'prometheus'
Nov 29 00:08:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:33.910+0000 7fa55b499140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 00:08:33 np0005539482 ceph-mgr[75473]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 00:08:33 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'rbd_support'
Nov 29 00:08:34 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:34.206+0000 7fa55b499140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 00:08:34 np0005539482 ceph-mgr[75473]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 00:08:34 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'restful'
Nov 29 00:08:34 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'rgw'
Nov 29 00:08:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:35.575+0000 7fa55b499140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 00:08:35 np0005539482 ceph-mgr[75473]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 00:08:35 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'rook'
Nov 29 00:08:37 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:37.752+0000 7fa55b499140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 00:08:37 np0005539482 ceph-mgr[75473]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 00:08:37 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'selftest'
Nov 29 00:08:38 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:38.006+0000 7fa55b499140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'snap_schedule'
Nov 29 00:08:38 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:38.259+0000 7fa55b499140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'stats'
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'status'
Nov 29 00:08:38 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:38.792+0000 7fa55b499140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 00:08:38 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'telegraf'
Nov 29 00:08:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:39.040+0000 7fa55b499140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 00:08:39 np0005539482 ceph-mgr[75473]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 00:08:39 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'telemetry'
Nov 29 00:08:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:39.660+0000 7fa55b499140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 00:08:39 np0005539482 ceph-mgr[75473]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 00:08:39 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 00:08:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:40.302+0000 7fa55b499140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:40 np0005539482 ceph-mgr[75473]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 00:08:40 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'volumes'
Nov 29 00:08:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:40.977+0000 7fa55b499140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 00:08:40 np0005539482 ceph-mgr[75473]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 00:08:40 np0005539482 ceph-mgr[75473]: mgr[py] Loading python module 'zabbix'
Nov 29 00:08:41 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:41.221+0000 7fa55b499140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Active manager daemon compute-0.csskcz restarted
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.csskcz
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: ms_deliver_dispatch: unhandled message 0x56323f4bd1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr handle_mgr_map Activating!
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr handle_mgr_map I am now activating
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.csskcz(active, starting, since 0.0174883s)
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: balancer
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Manager daemon compute-0.csskcz is now available
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:08:41
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] No pools available
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: Active manager daemon compute-0.csskcz restarted
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: Activating manager daemon compute-0.csskcz
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: Manager daemon compute-0.csskcz is now available
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: cephadm
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: crash
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: devicehealth
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: iostat
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: nfs
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: orchestrator
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: pg_autoscaler
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: progress
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [progress INFO root] Loading...
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [progress INFO root] No stored events to load
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [progress INFO root] Loaded [] historic events
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] recovery thread starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] starting setup
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: rbd_support
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: restful
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [restful WARNING root] server not running: no certificate configured
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: status
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: telemetry
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] PerfHandler: starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TaskHandler: starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"} v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] setup complete
Nov 29 00:08:41 np0005539482 ceph-mgr[75473]: mgr load Constructed class from module: volumes
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 00:08:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.csskcz(active, since 1.02909s)
Nov 29 00:08:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 00:08:42 np0005539482 elegant_tesla[76299]: {
Nov 29 00:08:42 np0005539482 elegant_tesla[76299]:    "mgrmap_epoch": 7,
Nov 29 00:08:42 np0005539482 elegant_tesla[76299]:    "initialized": true
Nov 29 00:08:42 np0005539482 elegant_tesla[76299]: }
Nov 29 00:08:42 np0005539482 systemd[1]: libpod-4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0.scope: Deactivated successfully.
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: Found migration_current of "None". Setting to last migration.
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:42 np0005539482 podman[76446]: 2025-11-29 05:08:42.362748483 +0000 UTC m=+0.041325310 container died 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:42 np0005539482 systemd[1]: var-lib-containers-storage-overlay-cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b-merged.mount: Deactivated successfully.
Nov 29 00:08:42 np0005539482 podman[76446]: 2025-11-29 05:08:42.416487085 +0000 UTC m=+0.095063872 container remove 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:08:42 np0005539482 systemd[1]: libpod-conmon-4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0.scope: Deactivated successfully.
Nov 29 00:08:42 np0005539482 podman[76461]: 2025-11-29 05:08:42.528238513 +0000 UTC m=+0.071071125 container create 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:42 np0005539482 systemd[1]: Started libpod-conmon-8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70.scope.
Nov 29 00:08:42 np0005539482 podman[76461]: 2025-11-29 05:08:42.499790647 +0000 UTC m=+0.042623319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:42 np0005539482 podman[76461]: 2025-11-29 05:08:42.616660497 +0000 UTC m=+0.159493079 container init 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:08:42 np0005539482 podman[76461]: 2025-11-29 05:08:42.630715027 +0000 UTC m=+0.173547619 container start 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:42 np0005539482 podman[76461]: 2025-11-29 05:08:42.634238725 +0000 UTC m=+0.177071307 container attach 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:42 np0005539482 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:42] ENGINE Bus STARTING
Nov 29 00:08:42 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:42] ENGINE Bus STARTING
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Serving on http://192.168.122.100:8765
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Serving on http://192.168.122.100:8765
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Serving on https://192.168.122.100:7150
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Serving on https://192.168.122.100:7150
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Bus STARTED
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Bus STARTED
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Client ('192.168.122.100', 36438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Client ('192.168.122.100', 36438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 00:08:43 np0005539482 systemd[1]: libpod-8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70.scope: Deactivated successfully.
Nov 29 00:08:43 np0005539482 podman[76461]: 2025-11-29 05:08:43.220845537 +0000 UTC m=+0.763678159 container died 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:43 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531-merged.mount: Deactivated successfully.
Nov 29 00:08:43 np0005539482 podman[76461]: 2025-11-29 05:08:43.28150499 +0000 UTC m=+0.824337602 container remove 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:08:43 np0005539482 systemd[1]: libpod-conmon-8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70.scope: Deactivated successfully.
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: [29/Nov/2025:05:08:42] ENGINE Bus STARTING
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:43 np0005539482 podman[76541]: 2025-11-29 05:08:43.349552567 +0000 UTC m=+0.050593463 container create 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:43 np0005539482 systemd[1]: Started libpod-conmon-5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1.scope.
Nov 29 00:08:43 np0005539482 podman[76541]: 2025-11-29 05:08:43.324795913 +0000 UTC m=+0.025836869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:43 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:43 np0005539482 podman[76541]: 2025-11-29 05:08:43.446779796 +0000 UTC m=+0.147820732 container init 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:08:43 np0005539482 podman[76541]: 2025-11-29 05:08:43.458234307 +0000 UTC m=+0.159275213 container start 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:43 np0005539482 podman[76541]: 2025-11-29 05:08:43.462237895 +0000 UTC m=+0.163278841 container attach 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_user
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 00:08:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_config
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 00:08:43 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 00:08:43 np0005539482 busy_morse[76558]: ssh user set to ceph-admin. sudo will be used
Nov 29 00:08:44 np0005539482 systemd[1]: libpod-5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1.scope: Deactivated successfully.
Nov 29 00:08:44 np0005539482 podman[76541]: 2025-11-29 05:08:44.009907951 +0000 UTC m=+0.710948857 container died 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:44 np0005539482 systemd[1]: var-lib-containers-storage-overlay-38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429-merged.mount: Deactivated successfully.
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920999 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:08:44 np0005539482 podman[76541]: 2025-11-29 05:08:44.049626765 +0000 UTC m=+0.750667651 container remove 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:08:44 np0005539482 systemd[1]: libpod-conmon-5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1.scope: Deactivated successfully.
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.102299383 +0000 UTC m=+0.036501974 container create 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:44 np0005539482 systemd[1]: Started libpod-conmon-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope.
Nov 29 00:08:44 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.158880098 +0000 UTC m=+0.093082699 container init 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.168463109 +0000 UTC m=+0.102665710 container start 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.172459576 +0000 UTC m=+0.106662207 container attach 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.083177813 +0000 UTC m=+0.017380404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.csskcz(active, since 2s)
Nov 29 00:08:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:44 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 00:08:44 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 00:08:44 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Set ssh private key
Nov 29 00:08:44 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 00:08:44 np0005539482 systemd[1]: libpod-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope: Deactivated successfully.
Nov 29 00:08:44 np0005539482 conmon[76612]: conmon 0816a30fe9916fc923db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope/container/memory.events
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.700376188 +0000 UTC m=+0.634578759 container died 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:44 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29-merged.mount: Deactivated successfully.
Nov 29 00:08:44 np0005539482 podman[76596]: 2025-11-29 05:08:44.741547513 +0000 UTC m=+0.675750094 container remove 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:44 np0005539482 systemd[1]: libpod-conmon-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope: Deactivated successfully.
Nov 29 00:08:44 np0005539482 podman[76651]: 2025-11-29 05:08:44.798013785 +0000 UTC m=+0.040673956 container create 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:08:44 np0005539482 systemd[1]: Started libpod-conmon-3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c.scope.
Nov 29 00:08:44 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 podman[76651]: 2025-11-29 05:08:44.780027729 +0000 UTC m=+0.022687890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:44 np0005539482 podman[76651]: 2025-11-29 05:08:44.886127893 +0000 UTC m=+0.128788064 container init 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:44 np0005539482 podman[76651]: 2025-11-29 05:08:44.898301421 +0000 UTC m=+0.140961562 container start 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:44 np0005539482 podman[76651]: 2025-11-29 05:08:44.901827478 +0000 UTC m=+0.144487809 container attach 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Serving on http://192.168.122.100:8765
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Serving on https://192.168.122.100:7150
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Bus STARTED
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Client ('192.168.122.100', 36438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: Set ssh ssh_user
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: Set ssh ssh_config
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: ssh user set to ceph-admin. sudo will be used
Nov 29 00:08:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:45 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 00:08:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:45 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 00:08:45 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 00:08:45 np0005539482 systemd[1]: libpod-3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c.scope: Deactivated successfully.
Nov 29 00:08:45 np0005539482 podman[76651]: 2025-11-29 05:08:45.408574184 +0000 UTC m=+0.651234315 container died 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179-merged.mount: Deactivated successfully.
Nov 29 00:08:45 np0005539482 podman[76651]: 2025-11-29 05:08:45.445375354 +0000 UTC m=+0.688035485 container remove 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:08:45 np0005539482 systemd[1]: libpod-conmon-3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c.scope: Deactivated successfully.
Nov 29 00:08:45 np0005539482 podman[76706]: 2025-11-29 05:08:45.505192609 +0000 UTC m=+0.043432807 container create 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:08:45 np0005539482 systemd[1]: Started libpod-conmon-50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2.scope.
Nov 29 00:08:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:45 np0005539482 podman[76706]: 2025-11-29 05:08:45.48435619 +0000 UTC m=+0.022596458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:45 np0005539482 podman[76706]: 2025-11-29 05:08:45.592950019 +0000 UTC m=+0.131190277 container init 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:45 np0005539482 podman[76706]: 2025-11-29 05:08:45.601918126 +0000 UTC m=+0.140158314 container start 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:08:45 np0005539482 podman[76706]: 2025-11-29 05:08:45.605426334 +0000 UTC m=+0.143666592 container attach 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:45 np0005539482 ceph-mon[75176]: Set ssh ssh_identity_key
Nov 29 00:08:45 np0005539482 ceph-mon[75176]: Set ssh private key
Nov 29 00:08:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:46 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:46 np0005539482 nifty_keller[76723]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDS8PFZNHYEMG3QzJ0T+fsq/aAtPRNTx+aMKwd2m8VVUO19nX++nSeRt/d87czPj4O9wzAPcH/7BshbPhc5xJnUkkoie96X/xYUNJBzgPQ4C/dMz82vVAk18swfRLBdsW74BqGEu7OERVdC7Y/xtEZFAjVKTOVZYkAYbfZmvu44ueA6sdnziaQMAmYvaOUziZoMxb3in8kywmEgIPvNgynAuegdw1FsImfkj93iNTkAl3rt88tuZuEyivCdteCLNGs4gfAF486hIPVkr8c47sBLgeg/miI6UmsvJmZvUwcTFkJpfkr00fwvW85N5NVrKsd0ZrcJuYQHbylSWbgXPdHWDIMsc0DmLPgyBS3+KP6Z/1lceD5uCbPPibt7CfECZw5WGJ1esNQTBxNIw57Vi4zW0dT227oG7qCoWQ3pkr7UGt2XDzM8Fek1Z9GigPmtTTmcWypU9skH74gbbAcVFyD9Cl9GEwE6Kfyy6OuFPR/QBCYYcXV0+wlJxxr3VRdVQ40= zuul@controller
Nov 29 00:08:46 np0005539482 systemd[1]: libpod-50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2.scope: Deactivated successfully.
Nov 29 00:08:46 np0005539482 podman[76706]: 2025-11-29 05:08:46.098168771 +0000 UTC m=+0.636408989 container died 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2-merged.mount: Deactivated successfully.
Nov 29 00:08:46 np0005539482 podman[76706]: 2025-11-29 05:08:46.149426318 +0000 UTC m=+0.687666516 container remove 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:46 np0005539482 systemd[1]: libpod-conmon-50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2.scope: Deactivated successfully.
Nov 29 00:08:46 np0005539482 podman[76762]: 2025-11-29 05:08:46.217534926 +0000 UTC m=+0.046841291 container create 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:08:46 np0005539482 systemd[1]: Started libpod-conmon-34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee.scope.
Nov 29 00:08:46 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:46 np0005539482 podman[76762]: 2025-11-29 05:08:46.191853782 +0000 UTC m=+0.021160137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:46 np0005539482 podman[76762]: 2025-11-29 05:08:46.302308221 +0000 UTC m=+0.131614606 container init 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:46 np0005539482 podman[76762]: 2025-11-29 05:08:46.311690787 +0000 UTC m=+0.140997142 container start 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:08:46 np0005539482 podman[76762]: 2025-11-29 05:08:46.315254256 +0000 UTC m=+0.144560631 container attach 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:08:46 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:47 np0005539482 ceph-mon[75176]: Set ssh ssh_identity_pub
Nov 29 00:08:47 np0005539482 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 00:08:47 np0005539482 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 00:08:47 np0005539482 systemd-logind[793]: New session 20 of user ceph-admin.
Nov 29 00:08:47 np0005539482 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 00:08:47 np0005539482 systemd[1]: Starting User Manager for UID 42477...
Nov 29 00:08:47 np0005539482 systemd-logind[793]: New session 22 of user ceph-admin.
Nov 29 00:08:47 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:47 np0005539482 systemd[76809]: Queued start job for default target Main User Target.
Nov 29 00:08:47 np0005539482 systemd[76809]: Created slice User Application Slice.
Nov 29 00:08:47 np0005539482 systemd[76809]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 00:08:47 np0005539482 systemd[76809]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 00:08:47 np0005539482 systemd[76809]: Reached target Paths.
Nov 29 00:08:47 np0005539482 systemd[76809]: Reached target Timers.
Nov 29 00:08:47 np0005539482 systemd[76809]: Starting D-Bus User Message Bus Socket...
Nov 29 00:08:47 np0005539482 systemd[76809]: Starting Create User's Volatile Files and Directories...
Nov 29 00:08:47 np0005539482 systemd[76809]: Finished Create User's Volatile Files and Directories.
Nov 29 00:08:47 np0005539482 systemd[76809]: Listening on D-Bus User Message Bus Socket.
Nov 29 00:08:47 np0005539482 systemd[76809]: Reached target Sockets.
Nov 29 00:08:47 np0005539482 systemd[76809]: Reached target Basic System.
Nov 29 00:08:47 np0005539482 systemd[76809]: Reached target Main User Target.
Nov 29 00:08:47 np0005539482 systemd[76809]: Startup finished in 164ms.
Nov 29 00:08:47 np0005539482 systemd[1]: Started User Manager for UID 42477.
Nov 29 00:08:47 np0005539482 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 00:08:47 np0005539482 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 00:08:47 np0005539482 systemd-logind[793]: New session 23 of user ceph-admin.
Nov 29 00:08:47 np0005539482 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 00:08:48 np0005539482 systemd-logind[793]: New session 24 of user ceph-admin.
Nov 29 00:08:48 np0005539482 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 00:08:48 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 00:08:48 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 00:08:48 np0005539482 systemd-logind[793]: New session 25 of user ceph-admin.
Nov 29 00:08:48 np0005539482 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 00:08:49 np0005539482 systemd-logind[793]: New session 26 of user ceph-admin.
Nov 29 00:08:49 np0005539482 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 00:08:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052989 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:08:49 np0005539482 ceph-mon[75176]: Deploying cephadm binary to compute-0
Nov 29 00:08:49 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:49 np0005539482 systemd-logind[793]: New session 27 of user ceph-admin.
Nov 29 00:08:49 np0005539482 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 00:08:50 np0005539482 systemd-logind[793]: New session 28 of user ceph-admin.
Nov 29 00:08:50 np0005539482 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 00:08:50 np0005539482 systemd-logind[793]: New session 29 of user ceph-admin.
Nov 29 00:08:50 np0005539482 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 00:08:50 np0005539482 systemd-logind[793]: New session 30 of user ceph-admin.
Nov 29 00:08:50 np0005539482 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 00:08:51 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:51 np0005539482 systemd-logind[793]: New session 31 of user ceph-admin.
Nov 29 00:08:51 np0005539482 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 00:08:51 np0005539482 systemd-logind[793]: New session 32 of user ceph-admin.
Nov 29 00:08:51 np0005539482 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 00:08:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 00:08:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:52 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Added host compute-0
Nov 29 00:08:52 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 00:08:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 00:08:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 00:08:52 np0005539482 compassionate_solomon[76779]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 00:08:52 np0005539482 systemd[1]: libpod-34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee.scope: Deactivated successfully.
Nov 29 00:08:52 np0005539482 podman[76762]: 2025-11-29 05:08:52.322058971 +0000 UTC m=+6.151365346 container died 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:52 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c-merged.mount: Deactivated successfully.
Nov 29 00:08:52 np0005539482 podman[76762]: 2025-11-29 05:08:52.374482554 +0000 UTC m=+6.203788879 container remove 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:52 np0005539482 systemd[1]: libpod-conmon-34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee.scope: Deactivated successfully.
Nov 29 00:08:52 np0005539482 podman[77446]: 2025-11-29 05:08:52.443666676 +0000 UTC m=+0.049646964 container create 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:08:52 np0005539482 systemd[1]: Started libpod-conmon-9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e.scope.
Nov 29 00:08:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:52 np0005539482 podman[77446]: 2025-11-29 05:08:52.419177087 +0000 UTC m=+0.025157405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:52 np0005539482 podman[77446]: 2025-11-29 05:08:52.530880324 +0000 UTC m=+0.136860622 container init 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:52 np0005539482 podman[77446]: 2025-11-29 05:08:52.537352656 +0000 UTC m=+0.143332934 container start 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:08:52 np0005539482 podman[77446]: 2025-11-29 05:08:52.540494605 +0000 UTC m=+0.146474913 container attach 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:08:52 np0005539482 podman[77569]: 2025-11-29 05:08:52.84035351 +0000 UTC m=+0.061879162 container create 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:52 np0005539482 systemd[1]: Started libpod-conmon-0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550.scope.
Nov 29 00:08:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:52 np0005539482 podman[77569]: 2025-11-29 05:08:52.899147563 +0000 UTC m=+0.120673215 container init 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:08:52 np0005539482 podman[77569]: 2025-11-29 05:08:52.911328711 +0000 UTC m=+0.132854403 container start 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:08:52 np0005539482 podman[77569]: 2025-11-29 05:08:52.817713252 +0000 UTC m=+0.039238964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:52 np0005539482 podman[77569]: 2025-11-29 05:08:52.915287239 +0000 UTC m=+0.136812891 container attach 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 wonderful_driscoll[77493]: Scheduled mon update...
Nov 29 00:08:53 np0005539482 systemd[1]: libpod-9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e.scope: Deactivated successfully.
Nov 29 00:08:53 np0005539482 podman[77446]: 2025-11-29 05:08:53.088325154 +0000 UTC m=+0.694305472 container died 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:08:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a-merged.mount: Deactivated successfully.
Nov 29 00:08:53 np0005539482 podman[77446]: 2025-11-29 05:08:53.128830965 +0000 UTC m=+0.734811283 container remove 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:53 np0005539482 systemd[1]: libpod-conmon-9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e.scope: Deactivated successfully.
Nov 29 00:08:53 np0005539482 podman[77624]: 2025-11-29 05:08:53.191013103 +0000 UTC m=+0.035649655 container create 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:53 np0005539482 relaxed_davinci[77604]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 00:08:53 np0005539482 podman[77569]: 2025-11-29 05:08:53.21222517 +0000 UTC m=+0.433750822 container died 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:08:53 np0005539482 systemd[1]: Started libpod-conmon-786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608.scope.
Nov 29 00:08:53 np0005539482 systemd[1]: libpod-0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550.scope: Deactivated successfully.
Nov 29 00:08:53 np0005539482 podman[77569]: 2025-11-29 05:08:53.244416858 +0000 UTC m=+0.465942510 container remove 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:53 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:53 np0005539482 systemd[1]: libpod-conmon-0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550.scope: Deactivated successfully.
Nov 29 00:08:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:53 np0005539482 podman[77624]: 2025-11-29 05:08:53.262348222 +0000 UTC m=+0.106984784 container init 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:53 np0005539482 podman[77624]: 2025-11-29 05:08:53.267721679 +0000 UTC m=+0.112358231 container start 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:08:53 np0005539482 podman[77624]: 2025-11-29 05:08:53.270631584 +0000 UTC m=+0.115268166 container attach 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:08:53 np0005539482 podman[77624]: 2025-11-29 05:08:53.175935601 +0000 UTC m=+0.020572173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: Added host compute-0
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: Saving service mon spec with placement count:5
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bd8e940ee27e6268b82c1967ba0634fbd526883a094ff47b788fb91b10daf543-merged.mount: Deactivated successfully.
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 00:08:53 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 optimistic_blackwell[77647]: Scheduled mgr update...
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:08:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:53 np0005539482 systemd[1]: libpod-786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608.scope: Deactivated successfully.
Nov 29 00:08:53 np0005539482 podman[77624]: 2025-11-29 05:08:53.817027992 +0000 UTC m=+0.661664574 container died 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:08:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc-merged.mount: Deactivated successfully.
Nov 29 00:08:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:08:54 np0005539482 podman[77624]: 2025-11-29 05:08:54.160884254 +0000 UTC m=+1.005520846 container remove 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:08:54 np0005539482 systemd[1]: libpod-conmon-786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608.scope: Deactivated successfully.
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.2366535 +0000 UTC m=+0.045769207 container create 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:08:54 np0005539482 systemd[1]: Started libpod-conmon-424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4.scope.
Nov 29 00:08:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.222391897 +0000 UTC m=+0.031507624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.331840224 +0000 UTC m=+0.140956001 container init 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.33937176 +0000 UTC m=+0.148487507 container start 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.343306547 +0000 UTC m=+0.152422304 container attach 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:54 np0005539482 podman[78020]: 2025-11-29 05:08:54.738209212 +0000 UTC m=+0.052181139 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:54 np0005539482 ceph-mon[75176]: Saving service mgr spec with placement count:2
Nov 29 00:08:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:54 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:54 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 00:08:54 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 00:08:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 00:08:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:54 np0005539482 gifted_agnesi[77924]: Scheduled crash update...
Nov 29 00:08:54 np0005539482 systemd[1]: libpod-424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4.scope: Deactivated successfully.
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.864130122 +0000 UTC m=+0.673245839 container died 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:08:54 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678-merged.mount: Deactivated successfully.
Nov 29 00:08:54 np0005539482 podman[77905]: 2025-11-29 05:08:54.927843873 +0000 UTC m=+0.736959610 container remove 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:08:54 np0005539482 systemd[1]: libpod-conmon-424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4.scope: Deactivated successfully.
Nov 29 00:08:54 np0005539482 podman[78055]: 2025-11-29 05:08:54.993503527 +0000 UTC m=+0.045327847 container create 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:55 np0005539482 systemd[1]: Started libpod-conmon-597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769.scope.
Nov 29 00:08:55 np0005539482 podman[78020]: 2025-11-29 05:08:55.04820788 +0000 UTC m=+0.362179807 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:55 np0005539482 podman[78055]: 2025-11-29 05:08:54.972084636 +0000 UTC m=+0.023909036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:55 np0005539482 podman[78055]: 2025-11-29 05:08:55.076046193 +0000 UTC m=+0.127870533 container init 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:08:55 np0005539482 podman[78055]: 2025-11-29 05:08:55.082122186 +0000 UTC m=+0.133946506 container start 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:55 np0005539482 podman[78055]: 2025-11-29 05:08:55.085701395 +0000 UTC m=+0.137525745 container attach 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:55 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:55 np0005539482 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78239 (sysctl)
Nov 29 00:08:55 np0005539482 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 00:08:55 np0005539482 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3994711497' entity='client.admin' 
Nov 29 00:08:55 np0005539482 systemd[1]: libpod-597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769.scope: Deactivated successfully.
Nov 29 00:08:55 np0005539482 podman[78055]: 2025-11-29 05:08:55.67606908 +0000 UTC m=+0.727893400 container died 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:08:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552-merged.mount: Deactivated successfully.
Nov 29 00:08:55 np0005539482 podman[78055]: 2025-11-29 05:08:55.71563832 +0000 UTC m=+0.767462640 container remove 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:08:55 np0005539482 systemd[1]: libpod-conmon-597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769.scope: Deactivated successfully.
Nov 29 00:08:55 np0005539482 podman[78256]: 2025-11-29 05:08:55.776251703 +0000 UTC m=+0.041800380 container create 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:08:55 np0005539482 systemd[1]: Started libpod-conmon-08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6.scope.
Nov 29 00:08:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: Saving service crash spec with placement *
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:55 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/3994711497' entity='client.admin' 
Nov 29 00:08:55 np0005539482 podman[78256]: 2025-11-29 05:08:55.759412053 +0000 UTC m=+0.024960720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:55 np0005539482 podman[78256]: 2025-11-29 05:08:55.862509211 +0000 UTC m=+0.128057878 container init 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:55 np0005539482 podman[78256]: 2025-11-29 05:08:55.86889529 +0000 UTC m=+0.134443937 container start 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:08:55 np0005539482 podman[78256]: 2025-11-29 05:08:55.872742945 +0000 UTC m=+0.138291622 container attach 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:56 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 00:08:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:08:56 np0005539482 systemd[1]: libpod-08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6.scope: Deactivated successfully.
Nov 29 00:08:56 np0005539482 podman[78256]: 2025-11-29 05:08:56.398354215 +0000 UTC m=+0.663902862 container died 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:08:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d-merged.mount: Deactivated successfully.
Nov 29 00:08:56 np0005539482 podman[78256]: 2025-11-29 05:08:56.450586845 +0000 UTC m=+0.716135492 container remove 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:56 np0005539482 systemd[1]: libpod-conmon-08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6.scope: Deactivated successfully.
Nov 29 00:08:56 np0005539482 podman[78467]: 2025-11-29 05:08:56.507518066 +0000 UTC m=+0.038445976 container create 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:56 np0005539482 systemd[1]: Started libpod-conmon-3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1.scope.
Nov 29 00:08:56 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:56 np0005539482 podman[78467]: 2025-11-29 05:08:56.492984337 +0000 UTC m=+0.023912277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:56 np0005539482 podman[78467]: 2025-11-29 05:08:56.59269649 +0000 UTC m=+0.123624410 container init 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:56 np0005539482 podman[78467]: 2025-11-29 05:08:56.597840803 +0000 UTC m=+0.128768713 container start 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:08:56 np0005539482 podman[78467]: 2025-11-29 05:08:56.600822639 +0000 UTC m=+0.131750579 container attach 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:56 np0005539482 podman[78624]: 2025-11-29 05:08:56.948291981 +0000 UTC m=+0.036702998 container create 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:08:56 np0005539482 systemd[1]: Started libpod-conmon-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope.
Nov 29 00:08:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:57 np0005539482 podman[78624]: 2025-11-29 05:08:56.931490722 +0000 UTC m=+0.019901759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:08:57 np0005539482 podman[78624]: 2025-11-29 05:08:57.028523855 +0000 UTC m=+0.116934892 container init 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:08:57 np0005539482 podman[78624]: 2025-11-29 05:08:57.034748973 +0000 UTC m=+0.123159990 container start 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:08:57 np0005539482 crazy_panini[78641]: 167 167
Nov 29 00:08:57 np0005539482 systemd[1]: libpod-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78624]: 2025-11-29 05:08:57.038079656 +0000 UTC m=+0.126490693 container attach 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:08:57 np0005539482 conmon[78641]: conmon 31b972cb37742d72368f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope/container/memory.events
Nov 29 00:08:57 np0005539482 podman[78624]: 2025-11-29 05:08:57.04190002 +0000 UTC m=+0.130311027 container died 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:08:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-74a79b72a9f62a6e5ab6e0f7ddff1fea0a95c791bbc56cbfe850ec76f3d1ed86-merged.mount: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78624]: 2025-11-29 05:08:57.077918783 +0000 UTC m=+0.166329820 container remove 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:08:57 np0005539482 systemd[1]: libpod-conmon-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope: Deactivated successfully.
Nov 29 00:08:57 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:57 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 00:08:57 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 00:08:57 np0005539482 flamboyant_lewin[78518]: Added label _admin to host compute-0
Nov 29 00:08:57 np0005539482 systemd[1]: libpod-3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1.scope: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78467]: 2025-11-29 05:08:57.138618057 +0000 UTC m=+0.669546007 container died 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:08:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816-merged.mount: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78467]: 2025-11-29 05:08:57.183708649 +0000 UTC m=+0.714636569 container remove 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:08:57 np0005539482 systemd[1]: libpod-conmon-3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1.scope: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78673]: 2025-11-29 05:08:57.238563266 +0000 UTC m=+0.037229480 container create 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:08:57 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:57 np0005539482 systemd[1]: Started libpod-conmon-91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6.scope.
Nov 29 00:08:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:57 np0005539482 podman[78673]: 2025-11-29 05:08:57.222801759 +0000 UTC m=+0.021467993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:57 np0005539482 podman[78673]: 2025-11-29 05:08:57.322731987 +0000 UTC m=+0.121398281 container init 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:57 np0005539482 podman[78673]: 2025-11-29 05:08:57.327692896 +0000 UTC m=+0.126359110 container start 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:08:57 np0005539482 podman[78673]: 2025-11-29 05:08:57.331465909 +0000 UTC m=+0.130132123 container attach 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 00:08:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2928868753' entity='client.admin' 
Nov 29 00:08:57 np0005539482 systemd[1]: libpod-91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6.scope: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78717]: 2025-11-29 05:08:57.90758592 +0000 UTC m=+0.029768896 container died 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:08:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d-merged.mount: Deactivated successfully.
Nov 29 00:08:57 np0005539482 podman[78717]: 2025-11-29 05:08:57.961562048 +0000 UTC m=+0.083745004 container remove 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:57 np0005539482 systemd[1]: libpod-conmon-91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6.scope: Deactivated successfully.
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.064968502 +0000 UTC m=+0.062096577 container create e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:08:58 np0005539482 systemd[1]: Started libpod-conmon-e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a.scope.
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.040738748 +0000 UTC m=+0.037866893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:58 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.159782087 +0000 UTC m=+0.156910222 container init e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.169516301 +0000 UTC m=+0.166644406 container start e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.173778114 +0000 UTC m=+0.170906219 container attach e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:08:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 00:08:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1442575940' entity='client.admin' 
Nov 29 00:08:58 np0005539482 compassionate_wiles[78749]: set mgr/dashboard/cluster/status
Nov 29 00:08:58 np0005539482 systemd[1]: libpod-e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a.scope: Deactivated successfully.
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.834154259 +0000 UTC m=+0.831282334 container died e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:08:58 np0005539482 ceph-mon[75176]: Added label _admin to host compute-0
Nov 29 00:08:58 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2928868753' entity='client.admin' 
Nov 29 00:08:58 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1442575940' entity='client.admin' 
Nov 29 00:08:58 np0005539482 systemd[1]: var-lib-containers-storage-overlay-8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3-merged.mount: Deactivated successfully.
Nov 29 00:08:58 np0005539482 podman[78732]: 2025-11-29 05:08:58.883181117 +0000 UTC m=+0.880309182 container remove e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:08:58 np0005539482 systemd[1]: libpod-conmon-e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a.scope: Deactivated successfully.
Nov 29 00:08:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:08:59 np0005539482 podman[78794]: 2025-11-29 05:08:59.100105009 +0000 UTC m=+0.062056976 container create 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:08:59 np0005539482 systemd[1]: Started libpod-conmon-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope.
Nov 29 00:08:59 np0005539482 podman[78794]: 2025-11-29 05:08:59.073965873 +0000 UTC m=+0.035917890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:08:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:59 np0005539482 podman[78794]: 2025-11-29 05:08:59.206232643 +0000 UTC m=+0.168184610 container init 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 00:08:59 np0005539482 podman[78794]: 2025-11-29 05:08:59.220635889 +0000 UTC m=+0.182587826 container start 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:59 np0005539482 podman[78794]: 2025-11-29 05:08:59.224562546 +0000 UTC m=+0.186514483 container attach 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:08:59 np0005539482 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 00:08:59 np0005539482 python3[78841]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:08:59 np0005539482 podman[78842]: 2025-11-29 05:08:59.586904556 +0000 UTC m=+0.080919431 container create 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:59 np0005539482 podman[78842]: 2025-11-29 05:08:59.553871839 +0000 UTC m=+0.047886764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:08:59 np0005539482 systemd[1]: Started libpod-conmon-60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f.scope.
Nov 29 00:08:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:08:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd788860352ad5b41e56aa95240c7584d28efb3bfe6d1c4a7d79d99b28de10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd788860352ad5b41e56aa95240c7584d28efb3bfe6d1c4a7d79d99b28de10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:08:59 np0005539482 podman[78842]: 2025-11-29 05:08:59.743992341 +0000 UTC m=+0.238007276 container init 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:08:59 np0005539482 podman[78842]: 2025-11-29 05:08:59.752045058 +0000 UTC m=+0.246059943 container start 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:08:59 np0005539482 podman[78842]: 2025-11-29 05:08:59.756927645 +0000 UTC m=+0.250942590 container attach 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523397030' entity='client.admin' 
Nov 29 00:09:00 np0005539482 systemd[1]: libpod-60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f.scope: Deactivated successfully.
Nov 29 00:09:00 np0005539482 podman[78842]: 2025-11-29 05:09:00.347532565 +0000 UTC m=+0.841547420 container died 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:09:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1ddd788860352ad5b41e56aa95240c7584d28efb3bfe6d1c4a7d79d99b28de10-merged.mount: Deactivated successfully.
Nov 29 00:09:00 np0005539482 podman[78842]: 2025-11-29 05:09:00.395445458 +0000 UTC m=+0.889460313 container remove 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:09:00 np0005539482 systemd[1]: libpod-conmon-60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f.scope: Deactivated successfully.
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]: [
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:    {
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "available": false,
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "ceph_device": false,
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "lsm_data": {},
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "lvs": [],
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "path": "/dev/sr0",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "rejected_reasons": [
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "Has a FileSystem",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "Insufficient space (<5GB)"
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        ],
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        "sys_api": {
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "actuators": null,
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "device_nodes": "sr0",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "devname": "sr0",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "human_readable_size": "482.00 KB",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "id_bus": "ata",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "model": "QEMU DVD-ROM",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "nr_requests": "2",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "parent": "/dev/sr0",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "partitions": {},
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "path": "/dev/sr0",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "removable": "1",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "rev": "2.5+",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "ro": "0",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "rotational": "1",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "sas_address": "",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "sas_device_handle": "",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "scheduler_mode": "mq-deadline",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "sectors": 0,
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "sectorsize": "2048",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "size": 493568.0,
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "support_discard": "2048",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "type": "disk",
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:            "vendor": "QEMU"
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:        }
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]:    }
Nov 29 00:09:00 np0005539482 musing_varahamihira[78810]: ]
Nov 29 00:09:00 np0005539482 systemd[1]: libpod-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope: Deactivated successfully.
Nov 29 00:09:00 np0005539482 podman[78794]: 2025-11-29 05:09:00.661726865 +0000 UTC m=+1.623678802 container died 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 00:09:00 np0005539482 systemd[1]: libpod-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope: Consumed 1.474s CPU time.
Nov 29 00:09:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28-merged.mount: Deactivated successfully.
Nov 29 00:09:00 np0005539482 podman[78794]: 2025-11-29 05:09:00.725926367 +0000 UTC m=+1.687878294 container remove 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:09:00 np0005539482 systemd[1]: libpod-conmon-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope: Deactivated successfully.
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:09:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:00 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 00:09:00 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 00:09:01 np0005539482 ceph-mgr[75473]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 00:09:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1523397030' entity='client.admin' 
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 00:09:01 np0005539482 ceph-mon[75176]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 00:09:01 np0005539482 ansible-async_wrapper.py[81126]: Invoked with j676110802398 30 /home/zuul/.ansible/tmp/ansible-tmp-1764392940.7766569-36445-157247356225760/AnsiballZ_command.py _
Nov 29 00:09:01 np0005539482 ansible-async_wrapper.py[81192]: Starting module and watcher
Nov 29 00:09:01 np0005539482 ansible-async_wrapper.py[81192]: Start watching 81193 (30)
Nov 29 00:09:01 np0005539482 ansible-async_wrapper.py[81193]: Start module (81193)
Nov 29 00:09:01 np0005539482 ansible-async_wrapper.py[81126]: Return async_wrapper task started.
Nov 29 00:09:01 np0005539482 python3[81195]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:01 np0005539482 podman[81266]: 2025-11-29 05:09:01.72900771 +0000 UTC m=+0.044283886 container create 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:09:01 np0005539482 systemd[1]: Started libpod-conmon-88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735.scope.
Nov 29 00:09:01 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:01 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73e4e7f968ca73f71fcf1081e0b2bf775174a9aaa4469e13f99e3431aef8ffc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:01 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73e4e7f968ca73f71fcf1081e0b2bf775174a9aaa4469e13f99e3431aef8ffc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:01 np0005539482 podman[81266]: 2025-11-29 05:09:01.708188582 +0000 UTC m=+0.023464658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:01 np0005539482 podman[81266]: 2025-11-29 05:09:01.80544513 +0000 UTC m=+0.120721196 container init 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:09:01 np0005539482 podman[81266]: 2025-11-29 05:09:01.81134239 +0000 UTC m=+0.126618436 container start 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:09:01 np0005539482 podman[81266]: 2025-11-29 05:09:01.814580962 +0000 UTC m=+0.129857008 container attach 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:09:01 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 00:09:01 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 00:09:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 00:09:02 np0005539482 compassionate_shannon[81307]: 
Nov 29 00:09:02 np0005539482 compassionate_shannon[81307]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 00:09:02 np0005539482 ceph-mon[75176]: Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 00:09:02 np0005539482 systemd[1]: libpod-88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735.scope: Deactivated successfully.
Nov 29 00:09:02 np0005539482 podman[81266]: 2025-11-29 05:09:02.34374773 +0000 UTC m=+0.659023816 container died 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b73e4e7f968ca73f71fcf1081e0b2bf775174a9aaa4469e13f99e3431aef8ffc-merged.mount: Deactivated successfully.
Nov 29 00:09:02 np0005539482 podman[81266]: 2025-11-29 05:09:02.387982113 +0000 UTC m=+0.703258159 container remove 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:09:02 np0005539482 systemd[1]: libpod-conmon-88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735.scope: Deactivated successfully.
Nov 29 00:09:02 np0005539482 ansible-async_wrapper.py[81193]: Module complete (81193)
Nov 29 00:09:02 np0005539482 python3[81793]: ansible-ansible.legacy.async_status Invoked with jid=j676110802398.81126 mode=status _async_dir=/root/.ansible_async
Nov 29 00:09:03 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 00:09:03 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 00:09:03 np0005539482 python3[81985]: ansible-ansible.legacy.async_status Invoked with jid=j676110802398.81126 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 00:09:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:03 np0005539482 ceph-mon[75176]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 00:09:03 np0005539482 python3[82193]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:09:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:04 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 00:09:04 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 00:09:04 np0005539482 python3[82366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:04 np0005539482 podman[82435]: 2025-11-29 05:09:04.316239613 +0000 UTC m=+0.053743053 container create 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:04 np0005539482 systemd[1]: Started libpod-conmon-9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8.scope.
Nov 29 00:09:04 np0005539482 podman[82435]: 2025-11-29 05:09:04.291676323 +0000 UTC m=+0.029179743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:04 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:04 np0005539482 podman[82435]: 2025-11-29 05:09:04.414839612 +0000 UTC m=+0.152343062 container init 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:04 np0005539482 podman[82435]: 2025-11-29 05:09:04.429363501 +0000 UTC m=+0.166866911 container start 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:09:04 np0005539482 podman[82435]: 2025-11-29 05:09:04.432580433 +0000 UTC m=+0.170083853 container attach 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:04 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 00:09:04 np0005539482 suspicious_heisenberg[82480]: 
Nov 29 00:09:04 np0005539482 suspicious_heisenberg[82480]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 00:09:05 np0005539482 systemd[1]: libpod-9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8.scope: Deactivated successfully.
Nov 29 00:09:05 np0005539482 podman[82435]: 2025-11-29 05:09:05.005243098 +0000 UTC m=+0.742746508 container died 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:09:05 np0005539482 systemd[1]: var-lib-containers-storage-overlay-38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32-merged.mount: Deactivated successfully.
Nov 29 00:09:05 np0005539482 podman[82435]: 2025-11-29 05:09:05.044312066 +0000 UTC m=+0.781815466 container remove 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:09:05 np0005539482 systemd[1]: libpod-conmon-9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8.scope: Deactivated successfully.
Nov 29 00:09:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:05 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev c8c28ab8-34ab-456a-b367-92efd2bc7176 (Updating crash deployment (+1 -> 1))
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:05 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 00:09:05 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 00:09:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 00:09:05 np0005539482 python3[82915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:05 np0005539482 podman[82988]: 2025-11-29 05:09:05.520893539 +0000 UTC m=+0.034295136 container create 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:09:05 np0005539482 systemd[1]: Started libpod-conmon-844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be.scope.
Nov 29 00:09:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:05 np0005539482 podman[82988]: 2025-11-29 05:09:05.58367555 +0000 UTC m=+0.097077157 container init 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:09:05 np0005539482 podman[82988]: 2025-11-29 05:09:05.58957359 +0000 UTC m=+0.102975187 container start 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:09:05 np0005539482 podman[82988]: 2025-11-29 05:09:05.592710128 +0000 UTC m=+0.106111785 container attach 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:09:05 np0005539482 podman[82988]: 2025-11-29 05:09:05.507087385 +0000 UTC m=+0.020489002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:05 np0005539482 podman[83075]: 2025-11-29 05:09:05.90376958 +0000 UTC m=+0.055539192 container create 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:05 np0005539482 systemd[1]: Started libpod-conmon-703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e.scope.
Nov 29 00:09:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:05 np0005539482 podman[83075]: 2025-11-29 05:09:05.875254573 +0000 UTC m=+0.027024195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:05 np0005539482 podman[83075]: 2025-11-29 05:09:05.971054979 +0000 UTC m=+0.122824571 container init 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:05 np0005539482 podman[83075]: 2025-11-29 05:09:05.977187285 +0000 UTC m=+0.128956887 container start 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:05 np0005539482 jovial_goldstine[83102]: 167 167
Nov 29 00:09:05 np0005539482 podman[83075]: 2025-11-29 05:09:05.981083641 +0000 UTC m=+0.132853253 container attach 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:09:05 np0005539482 systemd[1]: libpod-703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e.scope: Deactivated successfully.
Nov 29 00:09:05 np0005539482 podman[83075]: 2025-11-29 05:09:05.982099343 +0000 UTC m=+0.133868925 container died 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:06 np0005539482 systemd[1]: var-lib-containers-storage-overlay-567260f5ca1ee9b4c183996d0f609b7d6ec837785c65bd0ff4372fd61953c72f-merged.mount: Deactivated successfully.
Nov 29 00:09:06 np0005539482 podman[83075]: 2025-11-29 05:09:06.023460933 +0000 UTC m=+0.175230545 container remove 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:09:06 np0005539482 systemd[1]: libpod-conmon-703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e.scope: Deactivated successfully.
Nov 29 00:09:06 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3938091443' entity='client.admin' 
Nov 29 00:09:06 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:06 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:06 np0005539482 podman[83170]: 2025-11-29 05:09:06.231679413 +0000 UTC m=+0.028436448 container died 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:09:06 np0005539482 systemd[1]: libpod-844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be.scope: Deactivated successfully.
Nov 29 00:09:06 np0005539482 systemd[1]: var-lib-containers-storage-overlay-01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59-merged.mount: Deactivated successfully.
Nov 29 00:09:06 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:06 np0005539482 podman[83170]: 2025-11-29 05:09:06.378520081 +0000 UTC m=+0.175277086 container remove 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: Deploying daemon crash.compute-0 on compute-0
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/3938091443' entity='client.admin' 
Nov 29 00:09:06 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:06 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:06 np0005539482 ansible-async_wrapper.py[81192]: Done in kid B.
Nov 29 00:09:06 np0005539482 systemd[1]: libpod-conmon-844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be.scope: Deactivated successfully.
Nov 29 00:09:06 np0005539482 systemd[1]: Starting Ceph crash.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:09:06 np0005539482 python3[83252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:06 np0005539482 podman[83290]: 2025-11-29 05:09:06.821167118 +0000 UTC m=+0.037843173 container create 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:06 np0005539482 podman[83309]: 2025-11-29 05:09:06.854028 +0000 UTC m=+0.038948208 container create 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:09:06 np0005539482 systemd[1]: Started libpod-conmon-8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc.scope.
Nov 29 00:09:06 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:06 np0005539482 podman[83290]: 2025-11-29 05:09:06.807127369 +0000 UTC m=+0.023803424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:06 np0005539482 podman[83290]: 2025-11-29 05:09:06.908973419 +0000 UTC m=+0.125649514 container init 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:09:06 np0005539482 podman[83309]: 2025-11-29 05:09:06.913500508 +0000 UTC m=+0.098420736 container init 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 00:09:06 np0005539482 podman[83290]: 2025-11-29 05:09:06.918981489 +0000 UTC m=+0.135657544 container start 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:06 np0005539482 podman[83290]: 2025-11-29 05:09:06.922378234 +0000 UTC m=+0.139054299 container attach 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:06 np0005539482 podman[83309]: 2025-11-29 05:09:06.922965426 +0000 UTC m=+0.107885644 container start 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:06 np0005539482 bash[83309]: 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f
Nov 29 00:09:06 np0005539482 podman[83309]: 2025-11-29 05:09:06.83857018 +0000 UTC m=+0.023490408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:06 np0005539482 systemd[1]: Started Ceph crash.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:06 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev c8c28ab8-34ab-456a-b367-92efd2bc7176 (Updating crash deployment (+1 -> 1))
Nov 29 00:09:06 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event c8c28ab8-34ab-456a-b367-92efd2bc7176 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 00:09:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:07 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7264640e-8713-4381-9094-d38af8c362b6 does not exist
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:07 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 34b6a833-63d3-45c9-995f-22c48b727833 (Updating mgr deployment (+1 -> 2))
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:07 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.hhpwsh on compute-0
Nov 29 00:09:07 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.hhpwsh on compute-0
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 00:09:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.284+0000 7fcee0c50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.284+0000 7fcee0c50640 -1 AuthRegistry(0x7fcedc066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.285+0000 7fcee0c50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.285+0000 7fcee0c50640 -1 AuthRegistry(0x7fcee0c4f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.286+0000 7fceda575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.286+0000 7fcee0c50640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 00:09:07 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 00:09:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1060990233' entity='client.admin' 
Nov 29 00:09:07 np0005539482 systemd[1]: libpod-8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc.scope: Deactivated successfully.
Nov 29 00:09:07 np0005539482 podman[83490]: 2025-11-29 05:09:07.527335609 +0000 UTC m=+0.030681165 container died 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:09:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957-merged.mount: Deactivated successfully.
Nov 29 00:09:07 np0005539482 podman[83490]: 2025-11-29 05:09:07.565295204 +0000 UTC m=+0.068640740 container remove 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:07 np0005539482 systemd[1]: libpod-conmon-8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc.scope: Deactivated successfully.
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.615255253 +0000 UTC m=+0.037898254 container create f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:09:07 np0005539482 systemd[1]: Started libpod-conmon-f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e.scope.
Nov 29 00:09:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.686669854 +0000 UTC m=+0.109312875 container init f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.691991581 +0000 UTC m=+0.114634592 container start f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.597306468 +0000 UTC m=+0.019949499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:07 np0005539482 elated_haibt[83546]: 167 167
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.69514405 +0000 UTC m=+0.117787051 container attach f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:09:07 np0005539482 systemd[1]: libpod-f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e.scope: Deactivated successfully.
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.698029794 +0000 UTC m=+0.120672805 container died f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-89759b840b21f40c5678935a0727a3885eba667973ad162da6f8eed3d31efab9-merged.mount: Deactivated successfully.
Nov 29 00:09:07 np0005539482 podman[83529]: 2025-11-29 05:09:07.733552885 +0000 UTC m=+0.156195886 container remove f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:09:07 np0005539482 systemd[1]: libpod-conmon-f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e.scope: Deactivated successfully.
Nov 29 00:09:07 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:07 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:07 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:08 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:08 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:08 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: Deploying daemon mgr.compute-0.hhpwsh on compute-0
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1060990233' entity='client.admin' 
Nov 29 00:09:08 np0005539482 python3[83626]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:08 np0005539482 podman[83665]: 2025-11-29 05:09:08.364184576 +0000 UTC m=+0.042071527 container create e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:08 np0005539482 systemd[1]: Started libpod-conmon-e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da.scope.
Nov 29 00:09:08 np0005539482 systemd[1]: Starting Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:09:08 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 podman[83665]: 2025-11-29 05:09:08.431813263 +0000 UTC m=+0.109700234 container init e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:08 np0005539482 podman[83665]: 2025-11-29 05:09:08.345574756 +0000 UTC m=+0.023461757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:08 np0005539482 podman[83665]: 2025-11-29 05:09:08.443682613 +0000 UTC m=+0.121569584 container start e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:08 np0005539482 podman[83665]: 2025-11-29 05:09:08.450592375 +0000 UTC m=+0.128479346 container attach e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:09:08 np0005539482 podman[83734]: 2025-11-29 05:09:08.592439556 +0000 UTC m=+0.036203148 container create dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/var/lib/ceph/mgr/ceph-compute-0.hhpwsh supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:08 np0005539482 podman[83734]: 2025-11-29 05:09:08.575465783 +0000 UTC m=+0.019229395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:08 np0005539482 podman[83734]: 2025-11-29 05:09:08.675924902 +0000 UTC m=+0.119688514 container init dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:09:08 np0005539482 podman[83734]: 2025-11-29 05:09:08.680574394 +0000 UTC m=+0.124337996 container start dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:08 np0005539482 bash[83734]: dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3
Nov 29 00:09:08 np0005539482 systemd[1]: Started Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:09:08 np0005539482 ceph-mgr[83753]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:09:08 np0005539482 ceph-mgr[83753]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 00:09:08 np0005539482 ceph-mgr[83753]: pidfile_write: ignore empty --pid-file
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 34b6a833-63d3-45c9-995f-22c48b727833 (Updating mgr deployment (+1 -> 2))
Nov 29 00:09:08 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 34b6a833-63d3-45c9-995f-22c48b727833 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 00:09:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:08 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'alerts'
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:09 np0005539482 ceph-mgr[83753]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 00:09:09 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'balancer'
Nov 29 00:09:09 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:09.148+0000 7f2f6405a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 00:09:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:09 np0005539482 ceph-mgr[83753]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 00:09:09 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'cephadm'
Nov 29 00:09:09 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:09.412+0000 7f2f6405a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 00:09:09 np0005539482 podman[84022]: 2025-11-29 05:09:09.570111939 +0000 UTC m=+0.046005133 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:09:09 np0005539482 podman[84022]: 2025-11-29 05:09:09.660663341 +0000 UTC m=+0.136556545 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 00:09:09 np0005539482 pensive_feynman[83682]: set require_min_compat_client to mimic
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 00:09:09 np0005539482 systemd[1]: libpod-e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da.scope: Deactivated successfully.
Nov 29 00:09:09 np0005539482 podman[83665]: 2025-11-29 05:09:09.788536233 +0000 UTC m=+1.466423184 container died e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:09:09 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4-merged.mount: Deactivated successfully.
Nov 29 00:09:09 np0005539482 podman[83665]: 2025-11-29 05:09:09.837256725 +0000 UTC m=+1.515143676 container remove e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:09:09 np0005539482 systemd[1]: libpod-conmon-e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da.scope: Deactivated successfully.
Nov 29 00:09:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0ebbf7a5-0723-4ea6-81f8-5f0c7cd40eb3 does not exist
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 123c56fd-4de1-4d12-a0f4-49efb944cb56 does not exist
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0f4d22fb-1a54-4dd4-9c44-303b731334cb does not exist
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 00:09:10 np0005539482 python3[84297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:10 np0005539482 podman[84303]: 2025-11-29 05:09:10.5126937 +0000 UTC m=+0.036873561 container create 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:09:10 np0005539482 systemd[1]: Started libpod-conmon-4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b.scope.
Nov 29 00:09:10 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.590774128 +0000 UTC m=+0.043161841 container create 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:09:10 np0005539482 podman[84303]: 2025-11-29 05:09:10.496130356 +0000 UTC m=+0.020310237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:10 np0005539482 podman[84303]: 2025-11-29 05:09:10.596397922 +0000 UTC m=+0.120577803 container init 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:10 np0005539482 podman[84303]: 2025-11-29 05:09:10.603705742 +0000 UTC m=+0.127885603 container start 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:10 np0005539482 podman[84303]: 2025-11-29 05:09:10.608500248 +0000 UTC m=+0.132680129 container attach 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:10 np0005539482 systemd[1]: Started libpod-conmon-17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221.scope.
Nov 29 00:09:10 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.573983818 +0000 UTC m=+0.026371551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.676440932 +0000 UTC m=+0.128828665 container init 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.681700938 +0000 UTC m=+0.134088661 container start 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:09:10 np0005539482 laughing_brahmagupta[84351]: 167 167
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.68546798 +0000 UTC m=+0.137855713 container attach 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:10 np0005539482 systemd[1]: libpod-17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221.scope: Deactivated successfully.
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.686047443 +0000 UTC m=+0.138435156 container died 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:10 np0005539482 systemd[1]: var-lib-containers-storage-overlay-01e4deca56bd8709fe3eec8b4c834483840e668f9a0f25ed2d0f2c2e7f480dbd-merged.mount: Deactivated successfully.
Nov 29 00:09:10 np0005539482 podman[84329]: 2025-11-29 05:09:10.735583602 +0000 UTC m=+0.187971325 container remove 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:09:10 np0005539482 systemd[1]: libpod-conmon-17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221.scope: Deactivated successfully.
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.csskcz (unknown last config time)...
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.csskcz (unknown last config time)...
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.csskcz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csskcz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.csskcz on compute-0
Nov 29 00:09:10 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.csskcz on compute-0
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:11 np0005539482 podman[84565]: 2025-11-29 05:09:11.312537912 +0000 UTC m=+0.051897283 container create 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [progress INFO root] Writing back 2 completed events
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 systemd[1]: Started libpod-conmon-3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1.scope.
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:09:11 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:11 np0005539482 podman[84565]: 2025-11-29 05:09:11.283697738 +0000 UTC m=+0.023057109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:11 np0005539482 podman[84565]: 2025-11-29 05:09:11.384545056 +0000 UTC m=+0.123904397 container init 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:11 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'crash'
Nov 29 00:09:11 np0005539482 podman[84565]: 2025-11-29 05:09:11.389171698 +0000 UTC m=+0.128531019 container start 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:09:11 np0005539482 podman[84565]: 2025-11-29 05:09:11.391925798 +0000 UTC m=+0.131285119 container attach 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:09:11 np0005539482 fervent_wilson[84629]: 167 167
Nov 29 00:09:11 np0005539482 systemd[1]: libpod-3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1.scope: Deactivated successfully.
Nov 29 00:09:11 np0005539482 podman[84639]: 2025-11-29 05:09:11.430455386 +0000 UTC m=+0.027235390 container died 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:11 np0005539482 systemd[1]: var-lib-containers-storage-overlay-29e148a15b676668f4765b347ae846124c7042b332300a81eb16be1d072c1c02-merged.mount: Deactivated successfully.
Nov 29 00:09:11 np0005539482 podman[84639]: 2025-11-29 05:09:11.471392546 +0000 UTC m=+0.068172510 container remove 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:09:11 np0005539482 systemd[1]: libpod-conmon-3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1.scope: Deactivated successfully.
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Added host compute-0
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 29 00:09:11 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 sweet_stonebraker[84339]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 00:09:11 np0005539482 sweet_stonebraker[84339]: Scheduled mon update...
Nov 29 00:09:11 np0005539482 sweet_stonebraker[84339]: Scheduled mgr update...
Nov 29 00:09:11 np0005539482 sweet_stonebraker[84339]: Scheduled osd.default_drive_group update...
Nov 29 00:09:11 np0005539482 ceph-mgr[83753]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 00:09:11 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'dashboard'
Nov 29 00:09:11 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:11.679+0000 7f2f6405a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 00:09:11 np0005539482 systemd[1]: libpod-4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b.scope: Deactivated successfully.
Nov 29 00:09:11 np0005539482 podman[84303]: 2025-11-29 05:09:11.684310009 +0000 UTC m=+1.208489870 container died 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:11 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f-merged.mount: Deactivated successfully.
Nov 29 00:09:11 np0005539482 podman[84303]: 2025-11-29 05:09:11.742672163 +0000 UTC m=+1.266852034 container remove 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 00:09:11 np0005539482 systemd[1]: libpod-conmon-4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b.scope: Deactivated successfully.
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: Reconfiguring mgr.compute-0.csskcz (unknown last config time)...
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csskcz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: Reconfiguring daemon mgr.compute-0.csskcz on compute-0
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:12 np0005539482 python3[84843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:12 np0005539482 podman[84873]: 2025-11-29 05:09:12.250964523 +0000 UTC m=+0.060322088 container create 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:12 np0005539482 systemd[1]: Started libpod-conmon-8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8.scope.
Nov 29 00:09:12 np0005539482 podman[84873]: 2025-11-29 05:09:12.226067705 +0000 UTC m=+0.035425320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:12 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:12 np0005539482 podman[84873]: 2025-11-29 05:09:12.375218045 +0000 UTC m=+0.184575600 container init 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:12 np0005539482 podman[84873]: 2025-11-29 05:09:12.388348904 +0000 UTC m=+0.197706429 container start 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:09:12 np0005539482 podman[84902]: 2025-11-29 05:09:12.389670913 +0000 UTC m=+0.091835201 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:12 np0005539482 podman[84873]: 2025-11-29 05:09:12.408461906 +0000 UTC m=+0.217819471 container attach 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:09:12 np0005539482 podman[84902]: 2025-11-29 05:09:12.490204444 +0000 UTC m=+0.192368692 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: Added host compute-0
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: Saving service mon spec with placement compute-0
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: Saving service mgr spec with placement compute-0
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: Saving service osd.default_drive_group spec with placement compute-0
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124182859' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 00:09:12 np0005539482 hopeful_brattain[84909]: 
Nov 29 00:09:12 np0005539482 hopeful_brattain[84909]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T05:07:51.349368+0000","services":{}},"progress_events":{}}
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:12 np0005539482 systemd[1]: libpod-8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8.scope: Deactivated successfully.
Nov 29 00:09:12 np0005539482 podman[84873]: 2025-11-29 05:09:12.989662349 +0000 UTC m=+0.799019884 container died 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:13 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'devicehealth'
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:09:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f-merged.mount: Deactivated successfully.
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d119416c-83f9-4b08-ba71-df4ca0804aea does not exist
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 00:09:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:13 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 1aa60e2d-3d9a-44de-86ca-53819f4dae3f (Updating mgr deployment (-1 -> 1))
Nov 29 00:09:13 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.hhpwsh from compute-0 -- ports [8765]
Nov 29 00:09:13 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.hhpwsh from compute-0 -- ports [8765]
Nov 29 00:09:13 np0005539482 podman[84873]: 2025-11-29 05:09:13.165014426 +0000 UTC m=+0.974371961 container remove 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:13 np0005539482 systemd[1]: libpod-conmon-8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8.scope: Deactivated successfully.
Nov 29 00:09:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:13 np0005539482 ceph-mgr[83753]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 00:09:13 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 00:09:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:13.363+0000 7f2f6405a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 00:09:13 np0005539482 systemd[1]: Stopping Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:09:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 00:09:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 00:09:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]:  from numpy import show_config as show_numpy_config
Nov 29 00:09:13 np0005539482 ceph-mgr[83753]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 00:09:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:13.886+0000 7f2f6405a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 00:09:13 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'influx'
Nov 29 00:09:14 np0005539482 ceph-mgr[83753]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 00:09:14 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:14.112+0000 7f2f6405a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 00:09:14 np0005539482 ceph-mgr[83753]: mgr[py] Loading python module 'insights'
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 podman[85196]: 2025-11-29 05:09:14.179652702 +0000 UTC m=+0.298499796 container died dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 00:09:14 np0005539482 systemd[1]: var-lib-containers-storage-overlay-94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e-merged.mount: Deactivated successfully.
Nov 29 00:09:14 np0005539482 podman[85196]: 2025-11-29 05:09:14.453254481 +0000 UTC m=+0.572101535 container remove dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:14 np0005539482 bash[85196]: ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh
Nov 29 00:09:14 np0005539482 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.hhpwsh.service: Main process exited, code=exited, status=143/n/a
Nov 29 00:09:14 np0005539482 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.hhpwsh.service: Failed with result 'exit-code'.
Nov 29 00:09:14 np0005539482 systemd[1]: Stopped Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:09:14 np0005539482 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.hhpwsh.service: Consumed 6.334s CPU time.
Nov 29 00:09:14 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:14 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:14 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:14 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.hhpwsh
Nov 29 00:09:14 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.hhpwsh
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"} v 0) v1
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]: dispatch
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]': finished
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 1aa60e2d-3d9a-44de-86ca-53819f4dae3f (Updating mgr deployment (-1 -> 1))
Nov 29 00:09:14 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 1aa60e2d-3d9a-44de-86ca-53819f4dae3f (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:14 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 73e4e3c7-b0b6-4dbe-b5eb-72e5e9dd87c8 does not exist
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: Removing daemon mgr.compute-0.hhpwsh from compute-0 -- ports [8765]
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: Removing key for mgr.compute-0.hhpwsh
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]: dispatch
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]': finished
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:09:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.581976586 +0000 UTC m=+0.042386394 container create e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:15 np0005539482 systemd[1]: Started libpod-conmon-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope.
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.562139439 +0000 UTC m=+0.022549207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:15 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.678493499 +0000 UTC m=+0.138903307 container init e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.685145474 +0000 UTC m=+0.145555262 container start e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.688493879 +0000 UTC m=+0.148903657 container attach e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:15 np0005539482 wizardly_sanderson[85453]: 167 167
Nov 29 00:09:15 np0005539482 systemd[1]: libpod-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope: Deactivated successfully.
Nov 29 00:09:15 np0005539482 conmon[85453]: conmon e873b3ea4b85a531d2a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope/container/memory.events
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.692396115 +0000 UTC m=+0.152805913 container died e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:15 np0005539482 systemd[1]: var-lib-containers-storage-overlay-888e7c58f974391c9e1c07898db2a516076851a79ca0ee1d35907f3963f7018a-merged.mount: Deactivated successfully.
Nov 29 00:09:15 np0005539482 podman[85437]: 2025-11-29 05:09:15.733795625 +0000 UTC m=+0.194205393 container remove e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:15 np0005539482 systemd[1]: libpod-conmon-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope: Deactivated successfully.
Nov 29 00:09:15 np0005539482 podman[85477]: 2025-11-29 05:09:15.904606972 +0000 UTC m=+0.047517627 container create d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:15 np0005539482 systemd[1]: Started libpod-conmon-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope.
Nov 29 00:09:15 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:15 np0005539482 podman[85477]: 2025-11-29 05:09:15.885771378 +0000 UTC m=+0.028682013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:15 np0005539482 podman[85477]: 2025-11-29 05:09:15.981863131 +0000 UTC m=+0.124773766 container init d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:09:15 np0005539482 podman[85477]: 2025-11-29 05:09:15.992650228 +0000 UTC m=+0.135560843 container start d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:09:15 np0005539482 podman[85477]: 2025-11-29 05:09:15.995855738 +0000 UTC m=+0.138766353 container attach d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:16 np0005539482 ceph-mgr[75473]: [progress INFO root] Writing back 3 completed events
Nov 29 00:09:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 00:09:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: --> relative data size: 1.0
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3cc3f442-c807-4e2a-868e-a4aae87af231
Nov 29 00:09:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"} v 0) v1
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]: dispatch
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]': finished
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:17 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 00:09:17 np0005539482 lvm[85554]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 00:09:17 np0005539482 lvm[85554]: VG ceph_vg0 finished
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:17 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 29 00:09:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 00:09:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770386242' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 00:09:18 np0005539482 elated_leakey[85493]: stderr: got monmap epoch 1
Nov 29 00:09:18 np0005539482 elated_leakey[85493]: --> Creating keyring file for osd.0
Nov 29 00:09:18 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]: dispatch
Nov 29 00:09:18 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]': finished
Nov 29 00:09:18 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 29 00:09:18 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 29 00:09:18 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 3cc3f442-c807-4e2a-868e-a4aae87af231 --setuser ceph --setgroup ceph
Nov 29 00:09:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:19 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 00:09:19 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 00:09:19 np0005539482 ceph-mon[75176]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 00:09:19 np0005539482 ceph-mon[75176]: Cluster is now healthy
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 00:09:20 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9801566-0c31-4202-a669-811037218c27
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"} v 0) v1
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]: dispatch
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]': finished
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:21 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:21 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]: dispatch
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]': finished
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 00:09:21 np0005539482 lvm[86493]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 00:09:21 np0005539482 lvm[86493]: VG ceph_vg1 finished
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 00:09:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668964783' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: stderr: got monmap epoch 1
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: --> Creating keyring file for osd.1
Nov 29 00:09:21 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 00:09:22 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 00:09:22 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid b9801566-0c31-4202-a669-811037218c27 --setuser ceph --setgroup ceph
Nov 29 00:09:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 00:09:25 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new eec69945-b157-41e1-8fba-3992c2dca958
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"} v 0) v1
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]: dispatch
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]': finished
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:25 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:25 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:25 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:26 np0005539482 lvm[87427]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 00:09:26 np0005539482 lvm[87427]: VG ceph_vg2 finished
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 29 00:09:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 00:09:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234793256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: stderr: got monmap epoch 1
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: --> Creating keyring file for osd.2
Nov 29 00:09:26 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]: dispatch
Nov 29 00:09:26 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]': finished
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 29 00:09:26 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid eec69945-b157-41e1-8fba-3992c2dca958 --setuser ceph --setgroup ceph
Nov 29 00:09:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:26.761+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:26.761+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:26.762+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: stderr: 2025-11-29T05:09:26.762+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 29 00:09:29 np0005539482 elated_leakey[85493]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 29 00:09:29 np0005539482 systemd[1]: libpod-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope: Deactivated successfully.
Nov 29 00:09:29 np0005539482 systemd[1]: libpod-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope: Consumed 6.664s CPU time.
Nov 29 00:09:29 np0005539482 podman[88336]: 2025-11-29 05:09:29.899419634 +0000 UTC m=+0.022936848 container died d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:09:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27-merged.mount: Deactivated successfully.
Nov 29 00:09:29 np0005539482 podman[88336]: 2025-11-29 05:09:29.956904612 +0000 UTC m=+0.080421836 container remove d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:09:29 np0005539482 systemd[1]: libpod-conmon-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope: Deactivated successfully.
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.56242659 +0000 UTC m=+0.045339943 container create fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:09:30 np0005539482 systemd[1]: Started libpod-conmon-fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee.scope.
Nov 29 00:09:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.541189164 +0000 UTC m=+0.024102567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.638765706 +0000 UTC m=+0.121679069 container init fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.645783307 +0000 UTC m=+0.128696640 container start fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.64874835 +0000 UTC m=+0.131661723 container attach fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:30 np0005539482 tender_lichterman[88507]: 167 167
Nov 29 00:09:30 np0005539482 systemd[1]: libpod-fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee.scope: Deactivated successfully.
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.650702497 +0000 UTC m=+0.133615880 container died fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:09:30 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4f7e1dc1ab94b21ea213afead605a2df305d5b1d730c3bdfc3b22576444e2ab3-merged.mount: Deactivated successfully.
Nov 29 00:09:30 np0005539482 podman[88491]: 2025-11-29 05:09:30.689041009 +0000 UTC m=+0.171954332 container remove fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:09:30 np0005539482 systemd[1]: libpod-conmon-fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee.scope: Deactivated successfully.
Nov 29 00:09:30 np0005539482 podman[88530]: 2025-11-29 05:09:30.83704495 +0000 UTC m=+0.034322046 container create b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:09:30 np0005539482 systemd[1]: Started libpod-conmon-b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b.scope.
Nov 29 00:09:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:30 np0005539482 podman[88530]: 2025-11-29 05:09:30.912593307 +0000 UTC m=+0.109870423 container init b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:30 np0005539482 podman[88530]: 2025-11-29 05:09:30.823022598 +0000 UTC m=+0.020299714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:30 np0005539482 podman[88530]: 2025-11-29 05:09:30.920813877 +0000 UTC m=+0.118090973 container start b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:09:30 np0005539482 podman[88530]: 2025-11-29 05:09:30.923699067 +0000 UTC m=+0.120976163 container attach b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:09:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]: {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:    "0": [
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:        {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "devices": [
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "/dev/loop3"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            ],
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_name": "ceph_lv0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_size": "21470642176",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "name": "ceph_lv0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "tags": {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cluster_name": "ceph",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.crush_device_class": "",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.encrypted": "0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osd_id": "0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.type": "block",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.vdo": "0"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            },
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "type": "block",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "vg_name": "ceph_vg0"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:        }
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:    ],
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:    "1": [
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:        {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "devices": [
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "/dev/loop4"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            ],
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_name": "ceph_lv1",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_size": "21470642176",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "name": "ceph_lv1",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "tags": {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cluster_name": "ceph",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.crush_device_class": "",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.encrypted": "0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osd_id": "1",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.type": "block",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.vdo": "0"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            },
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "type": "block",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "vg_name": "ceph_vg1"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:        }
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:    ],
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:    "2": [
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:        {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "devices": [
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "/dev/loop5"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            ],
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_name": "ceph_lv2",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_size": "21470642176",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "name": "ceph_lv2",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "tags": {
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.cluster_name": "ceph",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.crush_device_class": "",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.encrypted": "0",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osd_id": "2",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.type": "block",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:                "ceph.vdo": "0"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            },
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "type": "block",
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:            "vg_name": "ceph_vg2"
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:        }
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]:    ]
Nov 29 00:09:31 np0005539482 inspiring_maxwell[88547]: }
Nov 29 00:09:31 np0005539482 systemd[1]: libpod-b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b.scope: Deactivated successfully.
Nov 29 00:09:31 np0005539482 podman[88530]: 2025-11-29 05:09:31.68446361 +0000 UTC m=+0.881740726 container died b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:09:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f-merged.mount: Deactivated successfully.
Nov 29 00:09:31 np0005539482 podman[88530]: 2025-11-29 05:09:31.74815764 +0000 UTC m=+0.945434746 container remove b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:31 np0005539482 systemd[1]: libpod-conmon-b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b.scope: Deactivated successfully.
Nov 29 00:09:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 00:09:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 00:09:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:31 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 29 00:09:31 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.421804754 +0000 UTC m=+0.054886415 container create 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:09:32 np0005539482 systemd[1]: Started libpod-conmon-328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c.scope.
Nov 29 00:09:32 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.394810808 +0000 UTC m=+0.027892559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.493128929 +0000 UTC m=+0.126210670 container init 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.498629773 +0000 UTC m=+0.131711464 container start 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:32 np0005539482 hopeful_visvesvaraya[88727]: 167 167
Nov 29 00:09:32 np0005539482 systemd[1]: libpod-328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c.scope: Deactivated successfully.
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.503356598 +0000 UTC m=+0.136438299 container attach 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.504113136 +0000 UTC m=+0.137194837 container died 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:09:32 np0005539482 systemd[1]: var-lib-containers-storage-overlay-22468dff33d3adc570d9d2783d54f56121d7486c53d08cb3dbceec1506a158d8-merged.mount: Deactivated successfully.
Nov 29 00:09:32 np0005539482 podman[88711]: 2025-11-29 05:09:32.552419071 +0000 UTC m=+0.185500732 container remove 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:09:32 np0005539482 systemd[1]: libpod-conmon-328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c.scope: Deactivated successfully.
Nov 29 00:09:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 00:09:32 np0005539482 ceph-mon[75176]: Deploying daemon osd.0 on compute-0
Nov 29 00:09:32 np0005539482 podman[88758]: 2025-11-29 05:09:32.783850139 +0000 UTC m=+0.041987091 container create 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:32 np0005539482 systemd[1]: Started libpod-conmon-6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092.scope.
Nov 29 00:09:32 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:32 np0005539482 podman[88758]: 2025-11-29 05:09:32.763631618 +0000 UTC m=+0.021768580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:32 np0005539482 podman[88758]: 2025-11-29 05:09:32.866673715 +0000 UTC m=+0.124810677 container init 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:32 np0005539482 podman[88758]: 2025-11-29 05:09:32.879509457 +0000 UTC m=+0.137646409 container start 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:09:32 np0005539482 podman[88758]: 2025-11-29 05:09:32.882909379 +0000 UTC m=+0.141046331 container attach 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:09:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test[88772]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 00:09:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test[88772]:                            [--no-systemd] [--no-tmpfs]
Nov 29 00:09:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test[88772]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 00:09:33 np0005539482 systemd[1]: libpod-6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092.scope: Deactivated successfully.
Nov 29 00:09:33 np0005539482 podman[88758]: 2025-11-29 05:09:33.544663595 +0000 UTC m=+0.802800547 container died 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:33 np0005539482 systemd[1]: var-lib-containers-storage-overlay-84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c-merged.mount: Deactivated successfully.
Nov 29 00:09:33 np0005539482 podman[88758]: 2025-11-29 05:09:33.602083781 +0000 UTC m=+0.860220733 container remove 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:33 np0005539482 systemd[1]: libpod-conmon-6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092.scope: Deactivated successfully.
Nov 29 00:09:33 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:33 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:33 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:34 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:34 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:34 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:34 np0005539482 systemd[1]: Starting Ceph osd.0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:09:34 np0005539482 podman[88934]: 2025-11-29 05:09:34.763637772 +0000 UTC m=+0.047493555 container create bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:34 np0005539482 podman[88934]: 2025-11-29 05:09:34.741904074 +0000 UTC m=+0.025759867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:34 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:34 np0005539482 podman[88934]: 2025-11-29 05:09:34.875674438 +0000 UTC m=+0.159530241 container init bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:34 np0005539482 podman[88934]: 2025-11-29 05:09:34.886708445 +0000 UTC m=+0.170564218 container start bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:34 np0005539482 podman[88934]: 2025-11-29 05:09:34.890876897 +0000 UTC m=+0.174732710 container attach bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:09:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 00:09:35 np0005539482 bash[88934]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 00:09:35 np0005539482 bash[88934]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 00:09:35 np0005539482 bash[88934]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 00:09:35 np0005539482 bash[88934]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:35 np0005539482 bash[88934]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 00:09:35 np0005539482 bash[88934]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 00:09:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 00:09:35 np0005539482 bash[88934]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 00:09:36 np0005539482 systemd[1]: libpod-bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84.scope: Deactivated successfully.
Nov 29 00:09:36 np0005539482 systemd[1]: libpod-bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84.scope: Consumed 1.131s CPU time.
Nov 29 00:09:36 np0005539482 podman[89075]: 2025-11-29 05:09:36.033806986 +0000 UTC m=+0.021514195 container died bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:09:36 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce-merged.mount: Deactivated successfully.
Nov 29 00:09:36 np0005539482 podman[89075]: 2025-11-29 05:09:36.083579236 +0000 UTC m=+0.071286415 container remove bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:09:36 np0005539482 podman[89134]: 2025-11-29 05:09:36.354067415 +0000 UTC m=+0.072028273 container create a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:09:36 np0005539482 podman[89134]: 2025-11-29 05:09:36.314617405 +0000 UTC m=+0.032578313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:36 np0005539482 podman[89134]: 2025-11-29 05:09:36.449502117 +0000 UTC m=+0.167462965 container init a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:36 np0005539482 podman[89134]: 2025-11-29 05:09:36.459088389 +0000 UTC m=+0.177049217 container start a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:36 np0005539482 bash[89134]: a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59
Nov 29 00:09:36 np0005539482 systemd[1]: Started Ceph osd.0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: pidfile_write: ignore empty --pid-file
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:36 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 29 00:09:36 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 00:09:36 np0005539482 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: load: jerasure load: lrc 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.20493998 +0000 UTC m=+0.039552853 container create 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:09:37 np0005539482 systemd[1]: Started libpod-conmon-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope.
Nov 29 00:09:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:37 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.188360617 +0000 UTC m=+0.022973510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.290641504 +0000 UTC m=+0.125254407 container init 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.298973647 +0000 UTC m=+0.133586520 container start 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.302865362 +0000 UTC m=+0.137478255 container attach 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:09:37 np0005539482 practical_cartwright[89329]: 167 167
Nov 29 00:09:37 np0005539482 systemd[1]: libpod-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope: Deactivated successfully.
Nov 29 00:09:37 np0005539482 conmon[89329]: conmon 25db41e04cd8f87d1a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope/container/memory.events
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.306435489 +0000 UTC m=+0.141048362 container died 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 00:09:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-23f65bd79d9b9839a322df7d8593c915db149e6bc5532f94a4924bbfb08be18b-merged.mount: Deactivated successfully.
Nov 29 00:09:37 np0005539482 podman[89313]: 2025-11-29 05:09:37.356639749 +0000 UTC m=+0.191252622 container remove 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:09:37 np0005539482 systemd[1]: libpod-conmon-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope: Deactivated successfully.
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs mount
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs mount shared_bdev_used = 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Git sha 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DB SUMMARY
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DB Session ID:  GB8E2MAM6AAV9M8FEZQI
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                     Options.env: 0x55c4e686d2d0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                Options.info_log: 0x55c4e5a6a8a0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.write_buffer_manager: 0x55c4e6976460
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.row_cache: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                              Options.wal_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.wal_compression: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_background_jobs: 4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Compression algorithms supported:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kZSTD supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kXpressCompression supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kZlibCompression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e5a571f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e5a571f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a57090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e5a57090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e5a57090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ae016f9e-706d-4aae-a4b3-9ea8654bd733
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977614015, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977614300, "job": 1, "event": "recovery_finished"}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: freelist init
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: freelist _read_cfg
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs umount
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 00:09:37 np0005539482 podman[89379]: 2025-11-29 05:09:37.662818036 +0000 UTC m=+0.052913078 container create ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 00:09:37 np0005539482 systemd[1]: Started libpod-conmon-ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2.scope.
Nov 29 00:09:37 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:37 np0005539482 podman[89379]: 2025-11-29 05:09:37.632245943 +0000 UTC m=+0.022340965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:37 np0005539482 podman[89379]: 2025-11-29 05:09:37.741681794 +0000 UTC m=+0.131776786 container init ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:09:37 np0005539482 podman[89379]: 2025-11-29 05:09:37.758686629 +0000 UTC m=+0.148781621 container start ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:37 np0005539482 podman[89379]: 2025-11-29 05:09:37.761562149 +0000 UTC m=+0.151657141 container attach ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:09:37 np0005539482 ceph-mon[75176]: Deploying daemon osd.1 on compute-0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs mount
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluefs mount shared_bdev_used = 4718592
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Git sha 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DB SUMMARY
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DB Session ID:  GB8E2MAM6AAV9M8FEZQJ
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                     Options.env: 0x55c4e5bbf8f0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                Options.info_log: 0x55c4e5a61180
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.write_buffer_manager: 0x55c4e69766e0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.row_cache: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                              Options.wal_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.wal_compression: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_background_jobs: 4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Compression algorithms supported:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kZSTD supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kXpressCompression supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kZlibCompression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e5a571f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e5a571f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a571f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a57090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a57090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e5a57090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ae016f9e-706d-4aae-a4b3-9ea8654bd733
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977907666, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977912360, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ae016f9e-706d-4aae-a4b3-9ea8654bd733", "db_session_id": "GB8E2MAM6AAV9M8FEZQJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977915714, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ae016f9e-706d-4aae-a4b3-9ea8654bd733", "db_session_id": "GB8E2MAM6AAV9M8FEZQJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977918686, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ae016f9e-706d-4aae-a4b3-9ea8654bd733", "db_session_id": "GB8E2MAM6AAV9M8FEZQJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977920422, "job": 1, "event": "recovery_finished"}
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c4e5bc5c00
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: DB pointer 0x55c4e695fa00
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usag
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: _get_class not permitted to load lua
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: _get_class not permitted to load sdk
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: _get_class not permitted to load test_remote_reads
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 load_pgs
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 load_pgs opened 0 pgs
Nov 29 00:09:37 np0005539482 ceph-osd[89151]: osd.0 0 log_to_monitors true
Nov 29 00:09:37 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0[89147]: 2025-11-29T05:09:37.950+0000 7fc8efb21740 -1 osd.0 0 log_to_monitors true
Nov 29 00:09:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 00:09:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 00:09:38 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test[89575]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 00:09:38 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test[89575]:                            [--no-systemd] [--no-tmpfs]
Nov 29 00:09:38 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test[89575]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 00:09:38 np0005539482 systemd[1]: libpod-ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2.scope: Deactivated successfully.
Nov 29 00:09:38 np0005539482 podman[89379]: 2025-11-29 05:09:38.377239192 +0000 UTC m=+0.767334284 container died ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:09:38 np0005539482 systemd[1]: var-lib-containers-storage-overlay-950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434-merged.mount: Deactivated successfully.
Nov 29 00:09:38 np0005539482 podman[89379]: 2025-11-29 05:09:38.44617507 +0000 UTC m=+0.836270062 container remove ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:09:38 np0005539482 systemd[1]: libpod-conmon-ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2.scope: Deactivated successfully.
Nov 29 00:09:38 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:38 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:38 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:38 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:38 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:38 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:38 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 00:09:38 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 00:09:39 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:39 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:39 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:39 np0005539482 systemd[1]: Starting Ceph osd.1 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:09:39 np0005539482 podman[89953]: 2025-11-29 05:09:39.502981793 +0000 UTC m=+0.052648311 container create 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:09:39 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:39 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:39 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:39 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:39 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:39 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:39 np0005539482 podman[89953]: 2025-11-29 05:09:39.480357883 +0000 UTC m=+0.030024431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:39 np0005539482 podman[89953]: 2025-11-29 05:09:39.582784524 +0000 UTC m=+0.132451072 container init 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:09:39 np0005539482 podman[89953]: 2025-11-29 05:09:39.594507569 +0000 UTC m=+0.144174077 container start 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:09:39 np0005539482 podman[89953]: 2025-11-29 05:09:39.597654076 +0000 UTC m=+0.147320634 container attach 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0 done with init, starting boot process
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0 start_boot
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 00:09:39 np0005539482 ceph-osd[89151]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:39 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:39 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:39 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:39 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:39 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 00:09:39 np0005539482 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 00:09:40 np0005539482 bash[89953]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 00:09:40 np0005539482 bash[89953]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 00:09:40 np0005539482 bash[89953]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 00:09:40 np0005539482 bash[89953]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:40 np0005539482 bash[89953]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 00:09:40 np0005539482 bash[89953]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 00:09:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 00:09:40 np0005539482 bash[89953]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 00:09:40 np0005539482 systemd[1]: libpod-6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448.scope: Deactivated successfully.
Nov 29 00:09:40 np0005539482 podman[89953]: 2025-11-29 05:09:40.611150736 +0000 UTC m=+1.160817234 container died 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:09:40 np0005539482 systemd[1]: libpod-6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448.scope: Consumed 1.030s CPU time.
Nov 29 00:09:40 np0005539482 systemd[1]: var-lib-containers-storage-overlay-960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba-merged.mount: Deactivated successfully.
Nov 29 00:09:40 np0005539482 podman[89953]: 2025-11-29 05:09:40.752509725 +0000 UTC m=+1.302176273 container remove 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:40 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 00:09:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:40 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:40 np0005539482 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 00:09:40 np0005539482 podman[90161]: 2025-11-29 05:09:40.956584728 +0000 UTC m=+0.054755763 container create 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:09:41 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:41 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:41 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:41 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:41 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:41 np0005539482 podman[90161]: 2025-11-29 05:09:40.922381186 +0000 UTC m=+0.020552231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:41 np0005539482 podman[90161]: 2025-11-29 05:09:41.044878466 +0000 UTC m=+0.143049521 container init 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:09:41 np0005539482 podman[90161]: 2025-11-29 05:09:41.051607439 +0000 UTC m=+0.149778464 container start 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:09:41 np0005539482 bash[90161]: 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e
Nov 29 00:09:41 np0005539482 systemd[1]: Started Ceph osd.1 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: pidfile_write: ignore empty --pid-file
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:09:41
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] No pools available
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: load: jerasure load: lrc 
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:41 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:41 np0005539482 podman[90341]: 2025-11-29 05:09:41.86039418 +0000 UTC m=+0.051405271 container create 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:41 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 00:09:41 np0005539482 systemd[1]: Started libpod-conmon-19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9.scope.
Nov 29 00:09:41 np0005539482 podman[90341]: 2025-11-29 05:09:41.833167529 +0000 UTC m=+0.024178700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:41 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:41 np0005539482 podman[90341]: 2025-11-29 05:09:41.965823254 +0000 UTC m=+0.156834365 container init 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:09:41 np0005539482 podman[90341]: 2025-11-29 05:09:41.974083785 +0000 UTC m=+0.165094876 container start 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:41 np0005539482 epic_chebyshev[90361]: 167 167
Nov 29 00:09:41 np0005539482 systemd[1]: libpod-19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9.scope: Deactivated successfully.
Nov 29 00:09:41 np0005539482 podman[90341]: 2025-11-29 05:09:41.989976583 +0000 UTC m=+0.180987674 container attach 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:09:41 np0005539482 podman[90341]: 2025-11-29 05:09:41.990605237 +0000 UTC m=+0.181616328 container died 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:09:42 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b819f80dbab16a28f49433139465e4f05ae821c6cb83d4c113f3bf532e1e53a7-merged.mount: Deactivated successfully.
Nov 29 00:09:42 np0005539482 podman[90341]: 2025-11-29 05:09:42.08238485 +0000 UTC m=+0.273395981 container remove 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:42 np0005539482 systemd[1]: libpod-conmon-19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9.scope: Deactivated successfully.
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs mount
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs mount shared_bdev_used = 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Git sha 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DB SUMMARY
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DB Session ID:  Y4COQFGEX2AH8MDYLW2D
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                     Options.env: 0x559096765c70
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                Options.info_log: 0x5590959608a0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.write_buffer_manager: 0x559096876460
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.row_cache: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                              Options.wal_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.wal_compression: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_background_jobs: 4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Compression algorithms supported:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kZSTD supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kXpressCompression supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kZlibCompression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55909594d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55909594d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55909594d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 38520058-5321-4c20-b65e-18ccdc165478
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982183723, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982183904, "job": 1, "event": "recovery_finished"}
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: freelist init
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: freelist _read_cfg
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs umount
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 00:09:42 np0005539482 podman[90587]: 2025-11-29 05:09:42.386487016 +0000 UTC m=+0.069336358 container create d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:42 np0005539482 podman[90587]: 2025-11-29 05:09:42.342953277 +0000 UTC m=+0.025802489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs mount
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluefs mount shared_bdev_used = 4718592
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Git sha 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DB SUMMARY
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DB Session ID:  Y4COQFGEX2AH8MDYLW2C
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                     Options.env: 0x55909691e460
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                Options.info_log: 0x559095960600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.write_buffer_manager: 0x559096876460
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.row_cache: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                              Options.wal_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.wal_compression: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_background_jobs: 4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Compression algorithms supported:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kZSTD supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kXpressCompression supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kZlibCompression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 systemd[1]: Started libpod-conmon-d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3.scope.
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
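The option dump above encodes sizes as raw byte counts (e.g. `target_file_size_base: 67108864` is 64 MiB, `max_bytes_for_level_base: 1073741824` is 1 GiB). A minimal sketch of pulling these `Options.<name>: <value>` pairs out of journal lines and rendering byte-valued settings human-readably — the option names and sample values come from the log itself; the regex, function names, and unit threshold are illustrative, not part of any Ceph or RocksDB tooling:

```python
import re

# Matches "Options.<name>: <value>" anywhere in a journal line; the name may
# contain dots and array indices (e.g. compression_opts.level, ..._addtl[0]).
OPT_RE = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s+(\S+)")

def parse_options(lines):
    """Collect option name -> raw string value from journal lines."""
    opts = {}
    for line in lines:
        m = OPT_RE.search(line)
        if m:
            opts[m.group(1)] = m.group(2)
    return opts

def human_bytes(value):
    """Render an integer byte count as GiB/MiB/KiB when it divides evenly."""
    n = int(value)
    for unit, size in (("GiB", 1 << 30), ("MiB", 1 << 20), ("KiB", 1 << 10)):
        if n >= size and n % size == 0:
            return f"{n // size} {unit}"
    return str(n)

sample = [
    "rocksdb:    Options.target_file_size_base: 67108864",
    "rocksdb:    Options.max_bytes_for_level_base: 1073741824",
]
opts = parse_options(sample)
print(human_bytes(opts["target_file_size_base"]))     # 64 MiB
print(human_bytes(opts["max_bytes_for_level_base"]))  # 1 GiB
```

Reading the dump this way makes the tuning profile obvious at a glance: 16 MiB write buffers, a 1 GiB L1 with an 8x per-level multiplier, and soft/hard pending-compaction limits of 64 GiB and 256 GiB.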
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55909594d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55909594d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55909594d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 38520058-5321-4c20-b65e-18ccdc165478
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982468841, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982478588, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392982, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "38520058-5321-4c20-b65e-18ccdc165478", "db_session_id": "Y4COQFGEX2AH8MDYLW2C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982523587, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392982, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "38520058-5321-4c20-b65e-18ccdc165478", "db_session_id": "Y4COQFGEX2AH8MDYLW2C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:42 np0005539482 podman[90587]: 2025-11-29 05:09:42.524520413 +0000 UTC m=+0.207369595 container init d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982529568, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392982, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "38520058-5321-4c20-b65e-18ccdc165478", "db_session_id": "Y4COQFGEX2AH8MDYLW2C", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:42 np0005539482 podman[90587]: 2025-11-29 05:09:42.534616819 +0000 UTC m=+0.217466001 container start d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982552448, "job": 1, "event": "recovery_finished"}
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 00:09:42 np0005539482 podman[90587]: 2025-11-29 05:09:42.55601959 +0000 UTC m=+0.238868762 container attach d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559095aba000
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: DB pointer 0x55909685fa00
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: _get_class not permitted to load lua
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: _get_class not permitted to load sdk
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: _get_class not permitted to load test_remote_reads
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 load_pgs
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 load_pgs opened 0 pgs
Nov 29 00:09:42 np0005539482 ceph-osd[90181]: osd.1 0 log_to_monitors true
Nov 29 00:09:42 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1[90177]: 2025-11-29T05:09:42.628+0000 7f1f71507740 -1 osd.1 0 log_to_monitors true
Nov 29 00:09:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 00:09:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 00:09:42 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 00:09:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:42 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:42 np0005539482 ceph-mon[75176]: Deploying daemon osd.2 on compute-0
Nov 29 00:09:42 np0005539482 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 00:09:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test[90711]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 00:09:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test[90711]:                            [--no-systemd] [--no-tmpfs]
Nov 29 00:09:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test[90711]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 00:09:43 np0005539482 systemd[1]: libpod-d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3.scope: Deactivated successfully.
Nov 29 00:09:43 np0005539482 podman[90587]: 2025-11-29 05:09:43.166217761 +0000 UTC m=+0.849067033 container died d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:43 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:43 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:43 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:43 np0005539482 systemd[1]: var-lib-containers-storage-overlay-924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c-merged.mount: Deactivated successfully.
Nov 29 00:09:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 00:09:43 np0005539482 podman[90587]: 2025-11-29 05:09:43.28048374 +0000 UTC m=+0.963332912 container remove d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:09:43 np0005539482 systemd[1]: libpod-conmon-d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3.scope: Deactivated successfully.
Nov 29 00:09:43 np0005539482 python3[90863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.252 iops: 6208.455 elapsed_sec: 0.483
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: log_channel(cluster) log [WRN] : OSD bench result of 6208.454805 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 0 waiting for initial osdmap
Nov 29 00:09:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0[89147]: 2025-11-29T05:09:43.492+0000 7fc8ec2b8640 -1 osd.0 0 waiting for initial osdmap
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 00:09:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0[89147]: 2025-11-29T05:09:43.517+0000 7fc8e70c9640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 29 00:09:43 np0005539482 ceph-osd[89151]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 00:09:43 np0005539482 podman[90871]: 2025-11-29 05:09:43.531597728 +0000 UTC m=+0.055368788 container create b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:43 np0005539482 systemd[1]: Started libpod-conmon-b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865.scope.
Nov 29 00:09:43 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:43 np0005539482 podman[90871]: 2025-11-29 05:09:43.50661389 +0000 UTC m=+0.030385040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:43 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 00:09:43 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 00:09:43 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:43 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:43 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:43 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 00:09:43 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:43 np0005539482 podman[90871]: 2025-11-29 05:09:43.952821553 +0000 UTC m=+0.476592683 container init b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 00:09:43 np0005539482 podman[90871]: 2025-11-29 05:09:43.967606582 +0000 UTC m=+0.491377652 container start b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:09:43 np0005539482 podman[90871]: 2025-11-29 05:09:43.97206887 +0000 UTC m=+0.495840020 container attach b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:43 np0005539482 systemd[1]: Reloading.
Nov 29 00:09:44 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:09:44 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0 done with init, starting boot process
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0 start_boot
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 00:09:44 np0005539482 ceph-osd[90181]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554] boot
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:44 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:44 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:44 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:44 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: OSD bench result of 6208.454805 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 00:09:44 np0005539482 ceph-osd[89151]: osd.0 10 state: booting -> active
Nov 29 00:09:44 np0005539482 systemd[1]: Starting Ceph osd.2 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:09:44 np0005539482 podman[91047]: 2025-11-29 05:09:44.565906714 +0000 UTC m=+0.058059573 container create 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 00:09:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715470949' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 00:09:44 np0005539482 nice_merkle[90898]: 
Nov 29 00:09:44 np0005539482 nice_merkle[90898]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":10,"num_osds":3,"num_up_osds":1,"osd_up_since":1764392984,"num_in_osds":3,"osd_in_since":1764392965,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T05:09:43.260960+0000","services":{}},"progress_events":{}}
Nov 29 00:09:44 np0005539482 systemd[1]: libpod-b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865.scope: Deactivated successfully.
Nov 29 00:09:44 np0005539482 podman[90871]: 2025-11-29 05:09:44.617291024 +0000 UTC m=+1.141062084 container died b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:44 np0005539482 podman[91047]: 2025-11-29 05:09:44.538657591 +0000 UTC m=+0.030810530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:44 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:44 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235-merged.mount: Deactivated successfully.
Nov 29 00:09:44 np0005539482 podman[91047]: 2025-11-29 05:09:44.694811589 +0000 UTC m=+0.186964478 container init 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:09:44 np0005539482 podman[91047]: 2025-11-29 05:09:44.70018583 +0000 UTC m=+0.192338689 container start 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:09:44 np0005539482 podman[91047]: 2025-11-29 05:09:44.759326499 +0000 UTC m=+0.251479368 container attach 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:09:44 np0005539482 podman[90871]: 2025-11-29 05:09:44.789257437 +0000 UTC m=+1.313028497 container remove b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:09:44 np0005539482 systemd[1]: libpod-conmon-b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865.scope: Deactivated successfully.
Nov 29 00:09:45 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:45 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:45 np0005539482 python3[91107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554] boot
Nov 29 00:09:45 np0005539482 podman[91108]: 2025-11-29 05:09:45.311790025 +0000 UTC m=+0.043275533 container create 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:45 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:45 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:45 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] creating mgr pool
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 00:09:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 00:09:45 np0005539482 systemd[1]: Started libpod-conmon-55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a.scope.
Nov 29 00:09:45 np0005539482 podman[91108]: 2025-11-29 05:09:45.293345427 +0000 UTC m=+0.024830955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7598c0f4f42a120de85fc25c2eec914c851943d7550ae09de03f9a8fc4372d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7598c0f4f42a120de85fc25c2eec914c851943d7550ae09de03f9a8fc4372d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:45 np0005539482 podman[91108]: 2025-11-29 05:09:45.41763852 +0000 UTC m=+0.149124048 container init 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:45 np0005539482 podman[91108]: 2025-11-29 05:09:45.423722798 +0000 UTC m=+0.155208306 container start 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:45 np0005539482 podman[91108]: 2025-11-29 05:09:45.439654415 +0000 UTC m=+0.171139923 container attach 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 00:09:45 np0005539482 bash[91047]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 00:09:45 np0005539482 bash[91047]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 00:09:45 np0005539482 bash[91047]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 00:09:45 np0005539482 bash[91047]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:45 np0005539482 bash[91047]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 00:09:45 np0005539482 bash[91047]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 00:09:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 00:09:45 np0005539482 bash[91047]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 00:09:45 np0005539482 systemd[1]: libpod-155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5.scope: Deactivated successfully.
Nov 29 00:09:45 np0005539482 systemd[1]: libpod-155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5.scope: Consumed 1.030s CPU time.
Nov 29 00:09:45 np0005539482 podman[91242]: 2025-11-29 05:09:45.780639649 +0000 UTC m=+0.040383353 container died 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56-merged.mount: Deactivated successfully.
Nov 29 00:09:45 np0005539482 podman[91242]: 2025-11-29 05:09:45.909916163 +0000 UTC m=+0.169659787 container remove 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:46 np0005539482 podman[91323]: 2025-11-29 05:09:46.131738009 +0000 UTC m=+0.058078644 container create 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:09:46 np0005539482 podman[91323]: 2025-11-29 05:09:46.098900439 +0000 UTC m=+0.025241104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:46 np0005539482 podman[91323]: 2025-11-29 05:09:46.24649095 +0000 UTC m=+0.172831605 container init 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:09:46 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:46 np0005539482 podman[91323]: 2025-11-29 05:09:46.262942409 +0000 UTC m=+0.189283044 container start 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:46 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:46 np0005539482 bash[91323]: 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573
Nov 29 00:09:46 np0005539482 systemd[1]: Started Ceph osd.2 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: pidfile_write: ignore empty --pid-file
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 29 00:09:46 np0005539482 happy_moore[91131]: pool 'vms' created
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 29 00:09:46 np0005539482 ceph-osd[89151]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 00:09:46 np0005539482 ceph-osd[89151]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 00:09:46 np0005539482 ceph-osd[89151]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 00:09:46 np0005539482 systemd[1]: libpod-55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a.scope: Deactivated successfully.
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:46 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 00:09:46 np0005539482 podman[91108]: 2025-11-29 05:09:46.405937038 +0000 UTC m=+1.137422566 container died 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 00:09:46 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:46 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5e7598c0f4f42a120de85fc25c2eec914c851943d7550ae09de03f9a8fc4372d-merged.mount: Deactivated successfully.
Nov 29 00:09:46 np0005539482 podman[91108]: 2025-11-29 05:09:46.538078572 +0000 UTC m=+1.269564110 container remove 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:09:46 np0005539482 systemd[1]: libpod-conmon-55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a.scope: Deactivated successfully.
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 29 00:09:46 np0005539482 python3[91494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: load: jerasure load: lrc 
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:46 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 00:09:46 np0005539482 podman[91503]: 2025-11-29 05:09:46.962703129 +0000 UTC m=+0.051946834 container create 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:09:47 np0005539482 systemd[1]: Started libpod-conmon-72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd.scope.
Nov 29 00:09:47 np0005539482 podman[91503]: 2025-11-29 05:09:46.940284054 +0000 UTC m=+0.029527769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da229cbfedf1daaedab80c22b8f90dcec95f8eb700a15b7ac0e9fd06b2bc16ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da229cbfedf1daaedab80c22b8f90dcec95f8eb700a15b7ac0e9fd06b2bc16ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:47 np0005539482 podman[91503]: 2025-11-29 05:09:47.105492312 +0000 UTC m=+0.194736097 container init 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:47 np0005539482 podman[91503]: 2025-11-29 05:09:47.115662139 +0000 UTC m=+0.204905834 container start 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:09:47 np0005539482 podman[91503]: 2025-11-29 05:09:47.133074243 +0000 UTC m=+0.222318018 container attach 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 00:09:47 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:47 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.264020297 +0000 UTC m=+0.046831319 container create e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:09:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v36: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 00:09:47 np0005539482 systemd[1]: Started libpod-conmon-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope.
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.246227925 +0000 UTC m=+0.029038977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.409549477 +0000 UTC m=+0.192360519 container init e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.421537669 +0000 UTC m=+0.204348741 container start e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:47 np0005539482 recursing_panini[91580]: 167 167
Nov 29 00:09:47 np0005539482 systemd[1]: libpod-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope: Deactivated successfully.
Nov 29 00:09:47 np0005539482 conmon[91580]: conmon e066c8afb37c27a5ddc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope/container/memory.events
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.450924424 +0000 UTC m=+0.233735466 container attach e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.451908838 +0000 UTC m=+0.234719870 container died e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs mount
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs mount shared_bdev_used = 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Git sha 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DB SUMMARY
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DB Session ID:  6JZI3E9CISG6DWQI9SRA
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                     Options.env: 0x5577629e7d50
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                Options.info_log: 0x557761bde800
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.write_buffer_manager: 0x557762af8460
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.row_cache: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                              Options.wal_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.wal_compression: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_background_jobs: 4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Compression algorithms supported:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kZSTD supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kXpressCompression supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kZlibCompression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:47 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:47 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 58ce6855-c8a7-4728-93f6-6b17cab7a3d9
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987506293, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987506697, "job": 1, "event": "recovery_finished"}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: freelist init
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: freelist _read_cfg
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs umount
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 00:09:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2a0b31037095fa5f82a093b26ada942a9c2184aa4dc8c737844f1d46b5dc590e-merged.mount: Deactivated successfully.
Nov 29 00:09:47 np0005539482 podman[91563]: 2025-11-29 05:09:47.583504559 +0000 UTC m=+0.366315621 container remove e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:09:47 np0005539482 systemd[1]: libpod-conmon-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope: Deactivated successfully.
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs mount
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluefs mount shared_bdev_used = 4718592
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: RocksDB version: 7.9.2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Git sha 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DB SUMMARY
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DB Session ID:  6JZI3E9CISG6DWQI9SRB
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: CURRENT file:  CURRENT
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.error_if_exists: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.create_if_missing: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                     Options.env: 0x557762ba8460
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                Options.info_log: 0x557761bdf200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                              Options.statistics: (nil)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.use_fsync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                              Options.db_log_dir: 
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.write_buffer_manager: 0x557762af8460
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.unordered_write: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.row_cache: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                              Options.wal_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.two_write_queues: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.wal_compression: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.atomic_flush: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_background_jobs: 4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_background_compactions: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_subcompactions: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.max_open_files: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Compression algorithms supported:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kZSTD supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kXpressCompression supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kZlibCompression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdef60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557761bc6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdef60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdef60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557761bc6430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 58ce6855-c8a7-4728-93f6-6b17cab7a3d9
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987758588, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987763688, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392987, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58ce6855-c8a7-4728-93f6-6b17cab7a3d9", "db_session_id": "6JZI3E9CISG6DWQI9SRB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987766409, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392987, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58ce6855-c8a7-4728-93f6-6b17cab7a3d9", "db_session_id": "6JZI3E9CISG6DWQI9SRB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987773044, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392987, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58ce6855-c8a7-4728-93f6-6b17cab7a3d9", "db_session_id": "6JZI3E9CISG6DWQI9SRB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987775805, "job": 1, "event": "recovery_finished"}
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 00:09:47 np0005539482 podman[91820]: 2025-11-29 05:09:47.788213637 +0000 UTC m=+0.071411247 container create ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557762bb4000
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: DB pointer 0x557762ae9a00
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usag
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: _get_class not permitted to load lua
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: _get_class not permitted to load sdk
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: _get_class not permitted to load test_remote_reads
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 load_pgs
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 load_pgs opened 0 pgs
Nov 29 00:09:47 np0005539482 ceph-osd[91343]: osd.2 0 log_to_monitors true
Nov 29 00:09:47 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2[91339]: 2025-11-29T05:09:47.837+0000 7f9a1eee4740 -1 osd.2 0 log_to_monitors true
Nov 29 00:09:47 np0005539482 systemd[1]: Started libpod-conmon-ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28.scope.
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 00:09:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 00:09:47 np0005539482 podman[91820]: 2025-11-29 05:09:47.762882241 +0000 UTC m=+0.046079901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:47 np0005539482 podman[91820]: 2025-11-29 05:09:47.888913137 +0000 UTC m=+0.172110747 container init ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:09:47 np0005539482 podman[91820]: 2025-11-29 05:09:47.898207692 +0000 UTC m=+0.181405302 container start ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:47 np0005539482 podman[91820]: 2025-11-29 05:09:47.902004145 +0000 UTC m=+0.185201785 container attach ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.826 iops: 7123.470 elapsed_sec: 0.421
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: log_channel(cluster) log [WRN] : OSD bench result of 7123.469535 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 0 waiting for initial osdmap
Nov 29 00:09:48 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1[90177]: 2025-11-29T05:09:48.194+0000 7f1f6d487640 -1 osd.1 0 waiting for initial osdmap
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 set_numa_affinity not setting numa affinity
Nov 29 00:09:48 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1[90177]: 2025-11-29T05:09:48.215+0000 7f1f68aaf640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 29 00:09:48 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:48 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 29 00:09:48 np0005539482 great_sutherland[91541]: pool 'volumes' created
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376] boot
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:48 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 14 state: booting -> active
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:48 np0005539482 systemd[1]: libpod-72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd.scope: Deactivated successfully.
Nov 29 00:09:48 np0005539482 podman[91503]: 2025-11-29 05:09:48.517975707 +0000 UTC m=+1.607219392 container died 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:09:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-da229cbfedf1daaedab80c22b8f90dcec95f8eb700a15b7ac0e9fd06b2bc16ad-merged.mount: Deactivated successfully.
Nov 29 00:09:48 np0005539482 podman[91503]: 2025-11-29 05:09:48.561142947 +0000 UTC m=+1.650386642 container remove 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 00:09:48 np0005539482 systemd[1]: libpod-conmon-72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd.scope: Deactivated successfully.
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]: {
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "osd_id": 0,
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "type": "bluestore"
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:    },
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "osd_id": 1,
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "type": "bluestore"
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:    },
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "osd_id": 2,
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:        "type": "bluestore"
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]:    }
Nov 29 00:09:48 np0005539482 jolly_sinoussi[92055]: }
Nov 29 00:09:48 np0005539482 systemd[1]: libpod-ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28.scope: Deactivated successfully.
Nov 29 00:09:48 np0005539482 podman[91820]: 2025-11-29 05:09:48.786402156 +0000 UTC m=+1.069599766 container died ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5-merged.mount: Deactivated successfully.
Nov 29 00:09:48 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 00:09:48 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 00:09:48 np0005539482 podman[91820]: 2025-11-29 05:09:48.85035545 +0000 UTC m=+1.133553060 container remove ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:09:48 np0005539482 systemd[1]: libpod-conmon-ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28.scope: Deactivated successfully.
Nov 29 00:09:48 np0005539482 python3[92123]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:48 np0005539482 podman[92145]: 2025-11-29 05:09:48.923065149 +0000 UTC m=+0.041419558 container create 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 00:09:48 np0005539482 systemd[1]: Started libpod-conmon-761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c.scope.
Nov 29 00:09:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f978630d7e3fd0ada1631ce5ddcbd64f9b969d9bbaa8e00c3abef90fb3aa7df8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f978630d7e3fd0ada1631ce5ddcbd64f9b969d9bbaa8e00c3abef90fb3aa7df8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:48 np0005539482 podman[92145]: 2025-11-29 05:09:48.992001806 +0000 UTC m=+0.110356245 container init 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:09:48 np0005539482 podman[92145]: 2025-11-29 05:09:48.900622553 +0000 UTC m=+0.018976992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:48 np0005539482 podman[92145]: 2025-11-29 05:09:48.998704749 +0000 UTC m=+0.117059158 container start 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:49 np0005539482 podman[92145]: 2025-11-29 05:09:49.001618459 +0000 UTC m=+0.119972868 container attach 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v39: 3 pgs: 2 creating+peering, 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0 done with init, starting boot process
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0 start_boot
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 00:09:49 np0005539482 ceph-osd[91343]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: OSD bench result of 7123.469535 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376] boot
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 00:09:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 00:09:49 np0005539482 podman[92411]: 2025-11-29 05:09:49.736001672 +0000 UTC m=+0.105483367 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 00:09:49 np0005539482 ceph-mgr[75473]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 00:09:49 np0005539482 podman[92411]: 2025-11-29 05:09:49.858841 +0000 UTC m=+0.228322725 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 00:09:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:50 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:50 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 29 00:09:50 np0005539482 clever_lalande[92184]: pool 'backups' created
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 29 00:09:50 np0005539482 podman[92145]: 2025-11-29 05:09:50.546175277 +0000 UTC m=+1.664529676 container died 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:50 np0005539482 systemd[1]: libpod-761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c.scope: Deactivated successfully.
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:50 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:50 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f978630d7e3fd0ada1631ce5ddcbd64f9b969d9bbaa8e00c3abef90fb3aa7df8-merged.mount: Deactivated successfully.
Nov 29 00:09:50 np0005539482 podman[92145]: 2025-11-29 05:09:50.650599776 +0000 UTC m=+1.768954185 container remove 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:09:50 np0005539482 systemd[1]: libpod-conmon-761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c.scope: Deactivated successfully.
Nov 29 00:09:50 np0005539482 python3[92679]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:50 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:50 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=15 pruub=12.508188248s) [] r=-1 lpr=15 pi=[12,15)/1 crt=0'0 mlcod 0'0 active pruub 25.552719116s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:09:50 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=15 pruub=12.508188248s) [] r=-1 lpr=15 pi=[12,15)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.552719116s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:09:51 np0005539482 podman[92706]: 2025-11-29 05:09:51.054534141 +0000 UTC m=+0.077077345 container create 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:51 np0005539482 podman[92706]: 2025-11-29 05:09:51.015961823 +0000 UTC m=+0.038504947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.116596621 +0000 UTC m=+0.070864375 container create 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:51 np0005539482 systemd[1]: Started libpod-conmon-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope.
Nov 29 00:09:51 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63e12078ba9fd8577ff60397d62bf4e9eb6f1c9396ea21236e271c3f199ff62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63e12078ba9fd8577ff60397d62bf4e9eb6f1c9396ea21236e271c3f199ff62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:51 np0005539482 podman[92706]: 2025-11-29 05:09:51.162974109 +0000 UTC m=+0.185517273 container init 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:09:51 np0005539482 podman[92706]: 2025-11-29 05:09:51.172169663 +0000 UTC m=+0.194712767 container start 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:09:51 np0005539482 systemd[1]: Started libpod-conmon-8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe.scope.
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.09642045 +0000 UTC m=+0.050688194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:51 np0005539482 podman[92706]: 2025-11-29 05:09:51.194206478 +0000 UTC m=+0.216749602 container attach 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:09:51 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.233977926 +0000 UTC m=+0.188245660 container init 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.245052325 +0000 UTC m=+0.199320039 container start 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:51 np0005539482 angry_mendeleev[92753]: 167 167
Nov 29 00:09:51 np0005539482 systemd[1]: libpod-8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe.scope: Deactivated successfully.
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.264098548 +0000 UTC m=+0.218366272 container attach 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.265454781 +0000 UTC m=+0.219722545 container died 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 1 unknown, 1 active+clean, 2 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 00:09:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b6cd962b68267d1b418844a975e45ce825dd080630246223f04eee294dff3318-merged.mount: Deactivated successfully.
Nov 29 00:09:51 np0005539482 podman[92731]: 2025-11-29 05:09:51.369996374 +0000 UTC m=+0.324264088 container remove 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:09:51 np0005539482 systemd[1]: libpod-conmon-8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe.scope: Deactivated successfully.
Nov 29 00:09:51 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:51 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:51 np0005539482 podman[92778]: 2025-11-29 05:09:51.566008641 +0000 UTC m=+0.064804177 container create dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.csskcz(active, since 70s)
Nov 29 00:09:51 np0005539482 podman[92778]: 2025-11-29 05:09:51.527995327 +0000 UTC m=+0.026790823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:51 np0005539482 systemd[1]: Started libpod-conmon-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope.
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:51 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:51 np0005539482 podman[92778]: 2025-11-29 05:09:51.685522898 +0000 UTC m=+0.184318404 container init dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:51 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:51 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:51 np0005539482 podman[92778]: 2025-11-29 05:09:51.698287988 +0000 UTC m=+0.197083484 container start dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:51 np0005539482 podman[92778]: 2025-11-29 05:09:51.715128578 +0000 UTC m=+0.213924074 container attach dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 00:09:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:52 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:52 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 29 00:09:52 np0005539482 sharp_lovelace[92747]: pool 'images' created
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:52 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:52 np0005539482 systemd[1]: libpod-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope: Deactivated successfully.
Nov 29 00:09:52 np0005539482 conmon[92747]: conmon 618dc0b63e9825818ac7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope/container/memory.events
Nov 29 00:09:52 np0005539482 podman[92706]: 2025-11-29 05:09:52.706010848 +0000 UTC m=+1.728553992 container died 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:52 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a63e12078ba9fd8577ff60397d62bf4e9eb6f1c9396ea21236e271c3f199ff62-merged.mount: Deactivated successfully.
Nov 29 00:09:52 np0005539482 podman[92706]: 2025-11-29 05:09:52.866561674 +0000 UTC m=+1.889104778 container remove 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:09:52 np0005539482 systemd[1]: libpod-conmon-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope: Deactivated successfully.
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]: [
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:    {
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "available": false,
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "ceph_device": false,
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "lsm_data": {},
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "lvs": [],
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "path": "/dev/sr0",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "rejected_reasons": [
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "Has a FileSystem",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "Insufficient space (<5GB)"
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        ],
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        "sys_api": {
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "actuators": null,
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "device_nodes": "sr0",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "devname": "sr0",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "human_readable_size": "482.00 KB",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "id_bus": "ata",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "model": "QEMU DVD-ROM",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "nr_requests": "2",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "parent": "/dev/sr0",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "partitions": {},
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "path": "/dev/sr0",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "removable": "1",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "rev": "2.5+",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "ro": "0",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "rotational": "1",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "sas_address": "",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "sas_device_handle": "",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "scheduler_mode": "mq-deadline",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "sectors": 0,
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "sectorsize": "2048",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "size": 493568.0,
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "support_discard": "2048",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "type": "disk",
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:            "vendor": "QEMU"
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:        }
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]:    }
Nov 29 00:09:53 np0005539482 distracted_mccarthy[92814]: ]
Nov 29 00:09:53 np0005539482 systemd[1]: libpod-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope: Deactivated successfully.
Nov 29 00:09:53 np0005539482 systemd[1]: libpod-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope: Consumed 1.370s CPU time.
Nov 29 00:09:53 np0005539482 conmon[92814]: conmon dbfc17c47abf9d2d84eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope/container/memory.events
Nov 29 00:09:53 np0005539482 podman[92778]: 2025-11-29 05:09:53.056106733 +0000 UTC m=+1.554902249 container died dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 00:09:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308-merged.mount: Deactivated successfully.
Nov 29 00:09:53 np0005539482 podman[92778]: 2025-11-29 05:09:53.143545801 +0000 UTC m=+1.642341287 container remove dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 29 00:09:53 np0005539482 systemd[1]: libpod-conmon-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope: Deactivated successfully.
Nov 29 00:09:53 np0005539482 python3[94487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 27cd2587-bfa3-40be-a705-31cc158fd97c does not exist
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 58fdda4e-3ead-4ba1-a09e-92ba641fc131 does not exist
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 3bd75c38-28e3-414f-997f-544c1302689f does not exist
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:09:53 np0005539482 podman[94502]: 2025-11-29 05:09:53.252153622 +0000 UTC m=+0.065925255 container create b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v45: 5 pgs: 2 unknown, 1 active+clean, 2 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 00:09:53 np0005539482 systemd[1]: Started libpod-conmon-b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd.scope.
Nov 29 00:09:53 np0005539482 podman[94502]: 2025-11-29 05:09:53.221207909 +0000 UTC m=+0.034979562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:53 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35bef553a812b3d74e3b75ecbc5c70c8ad9226411aa2789731e5d1c723c53a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35bef553a812b3d74e3b75ecbc5c70c8ad9226411aa2789731e5d1c723c53a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:53 np0005539482 podman[94502]: 2025-11-29 05:09:53.356678454 +0000 UTC m=+0.170450107 container init b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:53 np0005539482 podman[94502]: 2025-11-29 05:09:53.366631376 +0000 UTC m=+0.180403259 container start b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:53 np0005539482 podman[94502]: 2025-11-29 05:09:53.377929981 +0000 UTC m=+0.191701614 container attach b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.909 iops: 6632.612 elapsed_sec: 0.452
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: log_channel(cluster) log [WRN] : OSD bench result of 6632.611728 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 0 waiting for initial osdmap
Nov 29 00:09:53 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2[91339]: 2025-11-29T05:09:53.391+0000 7f9a1b67b640 -1 osd.2 0 waiting for initial osdmap
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 check_osdmap_features require_osd_release unknown -> reef
Nov 29 00:09:53 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2[91339]: 2025-11-29T05:09:53.425+0000 7f9a1648c640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 set_numa_affinity not setting numa affinity
Nov 29 00:09:53 np0005539482 ceph-osd[91343]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:09:53 np0005539482 podman[94682]: 2025-11-29 05:09:53.861291327 +0000 UTC m=+0.050568311 container create c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:53 np0005539482 systemd[1]: Started libpod-conmon-c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205.scope.
Nov 29 00:09:53 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:53 np0005539482 podman[94682]: 2025-11-29 05:09:53.834973457 +0000 UTC m=+0.024250431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 00:09:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:53 np0005539482 podman[94682]: 2025-11-29 05:09:53.951148502 +0000 UTC m=+0.140425496 container init c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:53 np0005539482 podman[94682]: 2025-11-29 05:09:53.957226961 +0000 UTC m=+0.146503915 container start c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:09:53 np0005539482 podman[94682]: 2025-11-29 05:09:53.960880229 +0000 UTC m=+0.150157223 container attach c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:53 np0005539482 zealous_merkle[94698]: 167 167
Nov 29 00:09:53 np0005539482 systemd[1]: libpod-c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205.scope: Deactivated successfully.
Nov 29 00:09:53 np0005539482 podman[94682]: 2025-11-29 05:09:53.963165255 +0000 UTC m=+0.152442199 container died c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:09:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7ba5f51c46fc527eee6b6978a0409f2d0fea14632b2f7f93715a3b59dd0c788f-merged.mount: Deactivated successfully.
Nov 29 00:09:54 np0005539482 podman[94682]: 2025-11-29 05:09:54.001357184 +0000 UTC m=+0.190634148 container remove c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:54 np0005539482 systemd[1]: libpod-conmon-c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205.scope: Deactivated successfully.
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:54 np0005539482 podman[94726]: 2025-11-29 05:09:54.178973184 +0000 UTC m=+0.048112071 container create 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:09:54 np0005539482 systemd[1]: Started libpod-conmon-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope.
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 00:09:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 podman[94726]: 2025-11-29 05:09:54.155225336 +0000 UTC m=+0.024364223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254] boot
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 00:09:54 np0005539482 brave_haslett[94540]: pool 'cephfs.cephfs.meta' created
Nov 29 00:09:54 np0005539482 ceph-osd[91343]: osd.2 19 state: booting -> active
Nov 29 00:09:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:54 np0005539482 podman[94502]: 2025-11-29 05:09:54.26844756 +0000 UTC m=+1.082219203 container died b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:09:54 np0005539482 podman[94726]: 2025-11-29 05:09:54.274059617 +0000 UTC m=+0.143198504 container init 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:54 np0005539482 systemd[1]: libpod-b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd.scope: Deactivated successfully.
Nov 29 00:09:54 np0005539482 podman[94726]: 2025-11-29 05:09:54.281496847 +0000 UTC m=+0.150635714 container start 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:54 np0005539482 podman[94726]: 2025-11-29 05:09:54.286007468 +0000 UTC m=+0.155146335 container attach 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:54 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b35bef553a812b3d74e3b75ecbc5c70c8ad9226411aa2789731e5d1c723c53a0-merged.mount: Deactivated successfully.
Nov 29 00:09:54 np0005539482 podman[94502]: 2025-11-29 05:09:54.321667025 +0000 UTC m=+1.135438658 container remove b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:09:54 np0005539482 systemd[1]: libpod-conmon-b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd.scope: Deactivated successfully.
Nov 29 00:09:54 np0005539482 python3[94785]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:54 np0005539482 podman[94786]: 2025-11-29 05:09:54.642082048 +0000 UTC m=+0.041742197 container create f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:09:54 np0005539482 systemd[1]: Started libpod-conmon-f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020.scope.
Nov 29 00:09:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d11d71c6f8b237e799414f93a28cb82152221cf07a7e4cd78b9e14aecf74d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d11d71c6f8b237e799414f93a28cb82152221cf07a7e4cd78b9e14aecf74d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: OSD bench result of 6632.611728 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:54 np0005539482 ceph-mon[75176]: osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254] boot
Nov 29 00:09:54 np0005539482 podman[94786]: 2025-11-29 05:09:54.707922169 +0000 UTC m=+0.107582358 container init f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:54 np0005539482 podman[94786]: 2025-11-29 05:09:54.713653479 +0000 UTC m=+0.113313628 container start f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:09:54 np0005539482 podman[94786]: 2025-11-29 05:09:54.717236116 +0000 UTC m=+0.116896305 container attach f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:54 np0005539482 podman[94786]: 2025-11-29 05:09:54.622581094 +0000 UTC m=+0.022241303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 19 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19 pruub=8.281105995s) [2] r=-1 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.552719116s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:09:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 19 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19 pruub=8.280880928s) [2] r=-1 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.552719116s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:09:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19) [2] r=0 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 00:09:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 29 00:09:55 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 29 00:09:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19) [2] r=0 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v48: 6 pgs: 1 creating+peering, 1 unknown, 4 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 29 00:09:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 00:09:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:55 np0005539482 frosty_yonath[94742]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:09:55 np0005539482 frosty_yonath[94742]: --> relative data size: 1.0
Nov 29 00:09:55 np0005539482 frosty_yonath[94742]: --> All data devices are unavailable
Nov 29 00:09:55 np0005539482 systemd[1]: libpod-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope: Deactivated successfully.
Nov 29 00:09:55 np0005539482 systemd[1]: libpod-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope: Consumed 1.019s CPU time.
Nov 29 00:09:55 np0005539482 podman[94726]: 2025-11-29 05:09:55.536310208 +0000 UTC m=+1.405449115 container died 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21-merged.mount: Deactivated successfully.
Nov 29 00:09:55 np0005539482 podman[94726]: 2025-11-29 05:09:55.592928645 +0000 UTC m=+1.462067492 container remove 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:09:55 np0005539482 systemd[1]: libpod-conmon-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope: Deactivated successfully.
Nov 29 00:09:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 00:09:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 29 00:09:56 np0005539482 hungry_leakey[94802]: pool 'cephfs.cephfs.data' created
Nov 29 00:09:56 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 29 00:09:56 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:09:56 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 00:09:56 np0005539482 systemd[1]: libpod-f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020.scope: Deactivated successfully.
Nov 29 00:09:56 np0005539482 podman[94786]: 2025-11-29 05:09:56.275600379 +0000 UTC m=+1.675260528 container died f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.297565993 +0000 UTC m=+0.050402907 container create eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:09:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d09d11d71c6f8b237e799414f93a28cb82152221cf07a7e4cd78b9e14aecf74d-merged.mount: Deactivated successfully.
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.275050425 +0000 UTC m=+0.027887389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:56 np0005539482 systemd[1]: Started libpod-conmon-eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba.scope.
Nov 29 00:09:56 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:56 np0005539482 podman[94786]: 2025-11-29 05:09:56.485839262 +0000 UTC m=+1.885499421 container remove f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.667305515 +0000 UTC m=+0.420142449 container init eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.67530048 +0000 UTC m=+0.428137414 container start eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.67900032 +0000 UTC m=+0.431837274 container attach eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:56 np0005539482 nice_shtern[95034]: 167 167
Nov 29 00:09:56 np0005539482 systemd[1]: libpod-eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba.scope: Deactivated successfully.
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.681814699 +0000 UTC m=+0.434651613 container died eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2ad2354e53d1422d8d5108d3d471a095c17a9914d7d6e1c6237393c553fc4f0e-merged.mount: Deactivated successfully.
Nov 29 00:09:56 np0005539482 podman[95006]: 2025-11-29 05:09:56.727131011 +0000 UTC m=+0.479967935 container remove eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:09:56 np0005539482 systemd[1]: libpod-conmon-eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba.scope: Deactivated successfully.
Nov 29 00:09:56 np0005539482 systemd[1]: libpod-conmon-f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020.scope: Deactivated successfully.
Nov 29 00:09:56 np0005539482 python3[95066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:56 np0005539482 podman[95085]: 2025-11-29 05:09:56.948230488 +0000 UTC m=+0.105648750 container create f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:56 np0005539482 podman[95085]: 2025-11-29 05:09:56.863820185 +0000 UTC m=+0.021238427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:56 np0005539482 podman[95086]: 2025-11-29 05:09:56.87796873 +0000 UTC m=+0.028665769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:56 np0005539482 podman[95086]: 2025-11-29 05:09:56.977533071 +0000 UTC m=+0.128230150 container create 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:09:56 np0005539482 systemd[1]: Started libpod-conmon-f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7.scope.
Nov 29 00:09:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9495060c104269bb19794effad620f10e6c708603f60af7947206c7b701a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9495060c104269bb19794effad620f10e6c708603f60af7947206c7b701a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:57 np0005539482 systemd[1]: Started libpod-conmon-57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb.scope.
Nov 29 00:09:57 np0005539482 podman[95085]: 2025-11-29 05:09:57.027015924 +0000 UTC m=+0.184434196 container init f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:09:57 np0005539482 podman[95085]: 2025-11-29 05:09:57.036988747 +0000 UTC m=+0.194406969 container start f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:09:57 np0005539482 podman[95085]: 2025-11-29 05:09:57.040683747 +0000 UTC m=+0.198101979 container attach f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:57 np0005539482 podman[95086]: 2025-11-29 05:09:57.067358786 +0000 UTC m=+0.218055825 container init 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:57 np0005539482 podman[95086]: 2025-11-29 05:09:57.074498599 +0000 UTC m=+0.225195648 container start 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:09:57 np0005539482 podman[95086]: 2025-11-29 05:09:57.078947627 +0000 UTC m=+0.229644686 container attach 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 00:09:57 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 00:09:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v50: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 29 00:09:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 29 00:09:57 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 29 00:09:57 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:09:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 00:09:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]: {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:    "0": [
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:        {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "devices": [
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "/dev/loop3"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            ],
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_name": "ceph_lv0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_size": "21470642176",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "name": "ceph_lv0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "tags": {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cluster_name": "ceph",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.crush_device_class": "",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.encrypted": "0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osd_id": "0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.type": "block",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.vdo": "0"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            },
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "type": "block",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "vg_name": "ceph_vg0"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:        }
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:    ],
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:    "1": [
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:        {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "devices": [
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "/dev/loop4"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            ],
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_name": "ceph_lv1",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_size": "21470642176",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "name": "ceph_lv1",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "tags": {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cluster_name": "ceph",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.crush_device_class": "",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.encrypted": "0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osd_id": "1",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.type": "block",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.vdo": "0"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            },
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "type": "block",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "vg_name": "ceph_vg1"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:        }
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:    ],
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:    "2": [
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:        {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "devices": [
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "/dev/loop5"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            ],
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_name": "ceph_lv2",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_size": "21470642176",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "name": "ceph_lv2",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "tags": {
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.cluster_name": "ceph",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.crush_device_class": "",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.encrypted": "0",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osd_id": "2",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.type": "block",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:                "ceph.vdo": "0"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            },
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "type": "block",
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:            "vg_name": "ceph_vg2"
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:        }
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]:    ]
Nov 29 00:09:57 np0005539482 objective_sutherland[95120]: }
Nov 29 00:09:57 np0005539482 systemd[1]: libpod-57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb.scope: Deactivated successfully.
Nov 29 00:09:57 np0005539482 podman[95086]: 2025-11-29 05:09:57.858185771 +0000 UTC m=+1.008882820 container died 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:09:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34-merged.mount: Deactivated successfully.
Nov 29 00:09:57 np0005539482 podman[95086]: 2025-11-29 05:09:57.919099882 +0000 UTC m=+1.069796921 container remove 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 00:09:57 np0005539482 systemd[1]: libpod-conmon-57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb.scope: Deactivated successfully.
Nov 29 00:09:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 00:09:58 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 00:09:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 00:09:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 29 00:09:58 np0005539482 awesome_engelbart[95115]: enabled application 'rbd' on pool 'vms'
Nov 29 00:09:58 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 29 00:09:58 np0005539482 systemd[1]: libpod-f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7.scope: Deactivated successfully.
Nov 29 00:09:58 np0005539482 podman[95085]: 2025-11-29 05:09:58.320811983 +0000 UTC m=+1.478230215 container died f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 00:09:58 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1ec9495060c104269bb19794effad620f10e6c708603f60af7947206c7b701a2-merged.mount: Deactivated successfully.
Nov 29 00:09:58 np0005539482 podman[95085]: 2025-11-29 05:09:58.370626675 +0000 UTC m=+1.528044897 container remove f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:09:58 np0005539482 systemd[1]: libpod-conmon-f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7.scope: Deactivated successfully.
Nov 29 00:09:58 np0005539482 python3[95313]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:09:58 np0005539482 podman[95342]: 2025-11-29 05:09:58.722534003 +0000 UTC m=+0.047560207 container create 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.730920537 +0000 UTC m=+0.059813005 container create 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:09:58 np0005539482 systemd[1]: Started libpod-conmon-4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2.scope.
Nov 29 00:09:58 np0005539482 systemd[1]: Started libpod-conmon-77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f.scope.
Nov 29 00:09:58 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7d857d29652ae4cfb6185f34dd9b15ef58766e30800bf2db2a1715e61fb100/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7d857d29652ae4cfb6185f34dd9b15ef58766e30800bf2db2a1715e61fb100/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:58 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:58 np0005539482 podman[95342]: 2025-11-29 05:09:58.698470998 +0000 UTC m=+0.023497262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.699456443 +0000 UTC m=+0.028348961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:58 np0005539482 podman[95342]: 2025-11-29 05:09:58.812724508 +0000 UTC m=+0.137750792 container init 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.815494135 +0000 UTC m=+0.144386623 container init 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.82228598 +0000 UTC m=+0.151178448 container start 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:58 np0005539482 podman[95342]: 2025-11-29 05:09:58.823703095 +0000 UTC m=+0.148729299 container start 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:09:58 np0005539482 lucid_carson[95372]: 167 167
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.826675707 +0000 UTC m=+0.155568285 container attach 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:58 np0005539482 systemd[1]: libpod-77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f.scope: Deactivated successfully.
Nov 29 00:09:58 np0005539482 podman[95342]: 2025-11-29 05:09:58.831198977 +0000 UTC m=+0.156225231 container attach 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.832854817 +0000 UTC m=+0.161747305 container died 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:09:58 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5c92ccc84ced699981d3e7426e5dde5c7fa4a77f8d8efd7c6ca2b14469db0d44-merged.mount: Deactivated successfully.
Nov 29 00:09:58 np0005539482 podman[95340]: 2025-11-29 05:09:58.880197939 +0000 UTC m=+0.209090417 container remove 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 00:09:58 np0005539482 systemd[1]: libpod-conmon-77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f.scope: Deactivated successfully.
Nov 29 00:09:59 np0005539482 podman[95401]: 2025-11-29 05:09:59.039668587 +0000 UTC m=+0.043502149 container create 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:09:59 np0005539482 systemd[1]: Started libpod-conmon-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope.
Nov 29 00:09:59 np0005539482 podman[95401]: 2025-11-29 05:09:59.019294411 +0000 UTC m=+0.023128033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:09:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:09:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:09:59 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:09:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:09:59 np0005539482 podman[95401]: 2025-11-29 05:09:59.172053867 +0000 UTC m=+0.175887449 container init 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:09:59 np0005539482 podman[95401]: 2025-11-29 05:09:59.185119925 +0000 UTC m=+0.188953497 container start 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:09:59 np0005539482 podman[95401]: 2025-11-29 05:09:59.187834991 +0000 UTC m=+0.191668563 container attach 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:09:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 creating+peering, 1 unknown, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:09:59 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 00:09:59 np0005539482 ceph-mon[75176]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:09:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 00:09:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 00:10:00 np0005539482 determined_bell[95419]: {
Nov 29 00:10:00 np0005539482 determined_bell[95419]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "osd_id": 0,
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "type": "bluestore"
Nov 29 00:10:00 np0005539482 determined_bell[95419]:    },
Nov 29 00:10:00 np0005539482 determined_bell[95419]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "osd_id": 1,
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "type": "bluestore"
Nov 29 00:10:00 np0005539482 determined_bell[95419]:    },
Nov 29 00:10:00 np0005539482 determined_bell[95419]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "osd_id": 2,
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:00 np0005539482 determined_bell[95419]:        "type": "bluestore"
Nov 29 00:10:00 np0005539482 determined_bell[95419]:    }
Nov 29 00:10:00 np0005539482 determined_bell[95419]: }
Nov 29 00:10:00 np0005539482 systemd[1]: libpod-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope: Deactivated successfully.
Nov 29 00:10:00 np0005539482 systemd[1]: libpod-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope: Consumed 1.099s CPU time.
Nov 29 00:10:00 np0005539482 podman[95401]: 2025-11-29 05:10:00.286690437 +0000 UTC m=+1.290524009 container died 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 29 00:10:00 np0005539482 jovial_herschel[95370]: enabled application 'rbd' on pool 'volumes'
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 29 00:10:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03-merged.mount: Deactivated successfully.
Nov 29 00:10:00 np0005539482 systemd[1]: libpod-4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2.scope: Deactivated successfully.
Nov 29 00:10:00 np0005539482 podman[95342]: 2025-11-29 05:10:00.343693424 +0000 UTC m=+1.668719668 container died 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:10:00 np0005539482 podman[95401]: 2025-11-29 05:10:00.359560079 +0000 UTC m=+1.363393651 container remove 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:10:00 np0005539482 systemd[1]: libpod-conmon-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope: Deactivated successfully.
Nov 29 00:10:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7b7d857d29652ae4cfb6185f34dd9b15ef58766e30800bf2db2a1715e61fb100-merged.mount: Deactivated successfully.
Nov 29 00:10:00 np0005539482 podman[95342]: 2025-11-29 05:10:00.39368212 +0000 UTC m=+1.718708324 container remove 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:10:00 np0005539482 systemd[1]: libpod-conmon-4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2.scope: Deactivated successfully.
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:00 np0005539482 python3[95574]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:00 np0005539482 podman[95623]: 2025-11-29 05:10:00.752113928 +0000 UTC m=+0.054562509 container create 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:00 np0005539482 systemd[1]: Started libpod-conmon-4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7.scope.
Nov 29 00:10:00 np0005539482 podman[95623]: 2025-11-29 05:10:00.730278746 +0000 UTC m=+0.032727347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:00 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142e63b434f75cf825904ff12c3da5bd101855bbb254ddb7202458bac744b0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142e63b434f75cf825904ff12c3da5bd101855bbb254ddb7202458bac744b0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:00 np0005539482 podman[95623]: 2025-11-29 05:10:00.85294334 +0000 UTC m=+0.155392011 container init 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:10:00 np0005539482 podman[95623]: 2025-11-29 05:10:00.86160518 +0000 UTC m=+0.164053771 container start 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 00:10:00 np0005539482 podman[95623]: 2025-11-29 05:10:00.865040054 +0000 UTC m=+0.167488645 container attach 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 00:10:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:01 np0005539482 podman[95784]: 2025-11-29 05:10:01.363317473 +0000 UTC m=+0.063777032 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 00:10:01 np0005539482 podman[95784]: 2025-11-29 05:10:01.476066076 +0000 UTC m=+0.176525575 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 29 00:10:02 np0005539482 sharp_hypatia[95663]: enabled application 'rbd' on pool 'backups'
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 29 00:10:02 np0005539482 systemd[1]: libpod-4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7.scope: Deactivated successfully.
Nov 29 00:10:02 np0005539482 podman[95623]: 2025-11-29 05:10:02.355080995 +0000 UTC m=+1.657529586 container died 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:10:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1142e63b434f75cf825904ff12c3da5bd101855bbb254ddb7202458bac744b0a-merged.mount: Deactivated successfully.
Nov 29 00:10:02 np0005539482 podman[95623]: 2025-11-29 05:10:02.395547 +0000 UTC m=+1.697995571 container remove 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:02 np0005539482 systemd[1]: libpod-conmon-4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7.scope: Deactivated successfully.
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:02 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7ddb7ed6-e687-41c2-bcff-7d9f67453acc does not exist
Nov 29 00:10:02 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6feaff37-d51b-4844-8d59-219664d05489 does not exist
Nov 29 00:10:02 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 02c5b774-29bf-41d9-ae52-20b9dbe37fad does not exist
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:02 np0005539482 python3[96063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:02 np0005539482 podman[96081]: 2025-11-29 05:10:02.734813231 +0000 UTC m=+0.036871128 container create bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:02 np0005539482 systemd[1]: Started libpod-conmon-bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1.scope.
Nov 29 00:10:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aefc9da802f903bdc738b8b62d6c966408cbbb5e84d36d0aa071330dbdf4196/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aefc9da802f903bdc738b8b62d6c966408cbbb5e84d36d0aa071330dbdf4196/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:02 np0005539482 podman[96081]: 2025-11-29 05:10:02.811478506 +0000 UTC m=+0.113536443 container init bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:10:02 np0005539482 podman[96081]: 2025-11-29 05:10:02.71832762 +0000 UTC m=+0.020385537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:02 np0005539482 podman[96081]: 2025-11-29 05:10:02.822821492 +0000 UTC m=+0.124879389 container start bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:10:02 np0005539482 podman[96081]: 2025-11-29 05:10:02.826631134 +0000 UTC m=+0.128689051 container attach bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.205076939 +0000 UTC m=+0.037826041 container create 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:10:03 np0005539482 systemd[1]: Started libpod-conmon-4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60.scope.
Nov 29 00:10:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.282298727 +0000 UTC m=+0.115047939 container init 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.18745381 +0000 UTC m=+0.020202982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.287311478 +0000 UTC m=+0.120060580 container start 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.291609874 +0000 UTC m=+0.124359006 container attach 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:10:03 np0005539482 pedantic_knuth[96272]: 167 167
Nov 29 00:10:03 np0005539482 systemd[1]: libpod-4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60.scope: Deactivated successfully.
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.294218686 +0000 UTC m=+0.126967788 container died 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:03 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b6b4bff7481e90a9bf674ee8a10c1207e13494f3ef9be803d5d41690bf409584-merged.mount: Deactivated successfully.
Nov 29 00:10:03 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 00:10:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 00:10:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 00:10:03 np0005539482 podman[96255]: 2025-11-29 05:10:03.348302952 +0000 UTC m=+0.181052044 container remove 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:10:03 np0005539482 systemd[1]: libpod-conmon-4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60.scope: Deactivated successfully.
Nov 29 00:10:03 np0005539482 podman[96297]: 2025-11-29 05:10:03.531491728 +0000 UTC m=+0.042879455 container create eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:03 np0005539482 systemd[1]: Started libpod-conmon-eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f.scope.
Nov 29 00:10:03 np0005539482 podman[96297]: 2025-11-29 05:10:03.509146614 +0000 UTC m=+0.020534321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:03 np0005539482 podman[96297]: 2025-11-29 05:10:03.624948971 +0000 UTC m=+0.136336698 container init eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:10:03 np0005539482 podman[96297]: 2025-11-29 05:10:03.632078924 +0000 UTC m=+0.143466641 container start eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:03 np0005539482 podman[96297]: 2025-11-29 05:10:03.635841475 +0000 UTC m=+0.147229202 container attach eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 29 00:10:04 np0005539482 loving_keller[96139]: enabled application 'rbd' on pool 'images'
Nov 29 00:10:04 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 29 00:10:04 np0005539482 systemd[1]: libpod-bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1.scope: Deactivated successfully.
Nov 29 00:10:04 np0005539482 podman[96081]: 2025-11-29 05:10:04.380794644 +0000 UTC m=+1.682852541 container died bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5aefc9da802f903bdc738b8b62d6c966408cbbb5e84d36d0aa071330dbdf4196-merged.mount: Deactivated successfully.
Nov 29 00:10:04 np0005539482 podman[96081]: 2025-11-29 05:10:04.421684429 +0000 UTC m=+1.723742326 container remove bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:10:04 np0005539482 systemd[1]: libpod-conmon-bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1.scope: Deactivated successfully.
Nov 29 00:10:04 np0005539482 hopeful_lamarr[96315]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:10:04 np0005539482 hopeful_lamarr[96315]: --> relative data size: 1.0
Nov 29 00:10:04 np0005539482 hopeful_lamarr[96315]: --> All data devices are unavailable
Nov 29 00:10:04 np0005539482 systemd[1]: libpod-eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f.scope: Deactivated successfully.
Nov 29 00:10:04 np0005539482 podman[96297]: 2025-11-29 05:10:04.715373022 +0000 UTC m=+1.226760819 container died eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:04 np0005539482 python3[96378]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871-merged.mount: Deactivated successfully.
Nov 29 00:10:04 np0005539482 podman[96297]: 2025-11-29 05:10:04.779130223 +0000 UTC m=+1.290517920 container remove eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:04 np0005539482 systemd[1]: libpod-conmon-eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f.scope: Deactivated successfully.
Nov 29 00:10:04 np0005539482 podman[96395]: 2025-11-29 05:10:04.806697844 +0000 UTC m=+0.044338140 container create 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:10:04 np0005539482 systemd[1]: Started libpod-conmon-111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0.scope.
Nov 29 00:10:04 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe45f81ee406cab442dfb89f68473e467e97349a517d0a9efa2b8fb03dcbd8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe45f81ee406cab442dfb89f68473e467e97349a517d0a9efa2b8fb03dcbd8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:04 np0005539482 podman[96395]: 2025-11-29 05:10:04.789015733 +0000 UTC m=+0.026656019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:04 np0005539482 podman[96395]: 2025-11-29 05:10:04.89992015 +0000 UTC m=+0.137560516 container init 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 00:10:04 np0005539482 podman[96395]: 2025-11-29 05:10:04.910197071 +0000 UTC m=+0.147837357 container start 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:04 np0005539482 podman[96395]: 2025-11-29 05:10:04.916038903 +0000 UTC m=+0.153679229 container attach 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:10:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:05 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.416678119 +0000 UTC m=+0.064115230 container create 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:10:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 00:10:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 00:10:05 np0005539482 systemd[1]: Started libpod-conmon-91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848.scope.
Nov 29 00:10:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.386037044 +0000 UTC m=+0.033474205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.490615497 +0000 UTC m=+0.138052598 container init 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.495983718 +0000 UTC m=+0.143420799 container start 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.50017485 +0000 UTC m=+0.147611921 container attach 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:10:05 np0005539482 zen_brahmagupta[96591]: 167 167
Nov 29 00:10:05 np0005539482 systemd[1]: libpod-91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848.scope: Deactivated successfully.
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.50262965 +0000 UTC m=+0.150066731 container died 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:05 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f0ed6a263fc44cccb35260a11430343ccc1d452f3e64efd59511ee678340cd4b-merged.mount: Deactivated successfully.
Nov 29 00:10:05 np0005539482 podman[96573]: 2025-11-29 05:10:05.541809003 +0000 UTC m=+0.189246084 container remove 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:10:05 np0005539482 systemd[1]: libpod-conmon-91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848.scope: Deactivated successfully.
Nov 29 00:10:05 np0005539482 podman[96614]: 2025-11-29 05:10:05.671491137 +0000 UTC m=+0.036888038 container create b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:10:05 np0005539482 systemd[1]: Started libpod-conmon-b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede.scope.
Nov 29 00:10:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:05 np0005539482 podman[96614]: 2025-11-29 05:10:05.654003152 +0000 UTC m=+0.019400073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:05 np0005539482 podman[96614]: 2025-11-29 05:10:05.757577641 +0000 UTC m=+0.122974582 container init b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:10:05 np0005539482 podman[96614]: 2025-11-29 05:10:05.767449441 +0000 UTC m=+0.132846332 container start b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:10:05 np0005539482 podman[96614]: 2025-11-29 05:10:05.771002257 +0000 UTC m=+0.136399238 container attach b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:10:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 00:10:06 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 00:10:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 00:10:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 29 00:10:06 np0005539482 gallant_wright[96434]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 00:10:06 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 29 00:10:06 np0005539482 systemd[1]: libpod-111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0.scope: Deactivated successfully.
Nov 29 00:10:06 np0005539482 podman[96395]: 2025-11-29 05:10:06.415936783 +0000 UTC m=+1.653577059 container died 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:06 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6fe45f81ee406cab442dfb89f68473e467e97349a517d0a9efa2b8fb03dcbd8e-merged.mount: Deactivated successfully.
Nov 29 00:10:06 np0005539482 podman[96395]: 2025-11-29 05:10:06.468640966 +0000 UTC m=+1.706281242 container remove 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:10:06 np0005539482 systemd[1]: libpod-conmon-111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0.scope: Deactivated successfully.
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]: {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:    "0": [
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:        {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "devices": [
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "/dev/loop3"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            ],
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_name": "ceph_lv0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_size": "21470642176",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "name": "ceph_lv0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "tags": {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.crush_device_class": "",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.encrypted": "0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osd_id": "0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.type": "block",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.vdo": "0"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            },
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "type": "block",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "vg_name": "ceph_vg0"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:        }
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:    ],
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:    "1": [
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:        {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "devices": [
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "/dev/loop4"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            ],
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_name": "ceph_lv1",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_size": "21470642176",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "name": "ceph_lv1",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "tags": {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.crush_device_class": "",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.encrypted": "0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osd_id": "1",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.type": "block",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.vdo": "0"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            },
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "type": "block",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "vg_name": "ceph_vg1"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:        }
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:    ],
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:    "2": [
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:        {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "devices": [
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "/dev/loop5"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            ],
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_name": "ceph_lv2",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_size": "21470642176",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "name": "ceph_lv2",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "tags": {
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.crush_device_class": "",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.encrypted": "0",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osd_id": "2",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.type": "block",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:                "ceph.vdo": "0"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            },
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "type": "block",
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:            "vg_name": "ceph_vg2"
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:        }
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]:    ]
Nov 29 00:10:06 np0005539482 brave_bardeen[96630]: }
Nov 29 00:10:06 np0005539482 systemd[1]: libpod-b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede.scope: Deactivated successfully.
Nov 29 00:10:06 np0005539482 podman[96655]: 2025-11-29 05:10:06.62257991 +0000 UTC m=+0.026337122 container died b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:06 np0005539482 systemd[1]: var-lib-containers-storage-overlay-98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe-merged.mount: Deactivated successfully.
Nov 29 00:10:06 np0005539482 podman[96655]: 2025-11-29 05:10:06.706484621 +0000 UTC m=+0.110241803 container remove b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:10:06 np0005539482 systemd[1]: libpod-conmon-b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede.scope: Deactivated successfully.
Nov 29 00:10:06 np0005539482 python3[96692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:06 np0005539482 podman[96718]: 2025-11-29 05:10:06.878818522 +0000 UTC m=+0.048809188 container create 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:10:06 np0005539482 systemd[1]: Started libpod-conmon-27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086.scope.
Nov 29 00:10:06 np0005539482 podman[96718]: 2025-11-29 05:10:06.850006011 +0000 UTC m=+0.019996647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:06 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f7cefc58315380a9f7a0e002759f3e7cbabf1d77e4e2ad770363f535fc6628/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f7cefc58315380a9f7a0e002759f3e7cbabf1d77e4e2ad770363f535fc6628/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:06 np0005539482 podman[96718]: 2025-11-29 05:10:06.973040023 +0000 UTC m=+0.143030759 container init 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:06 np0005539482 podman[96718]: 2025-11-29 05:10:06.983388075 +0000 UTC m=+0.153378701 container start 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:06 np0005539482 podman[96718]: 2025-11-29 05:10:06.98729996 +0000 UTC m=+0.157290616 container attach 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:10:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:07 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.462517178 +0000 UTC m=+0.057716695 container create 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:07 np0005539482 systemd[1]: Started libpod-conmon-9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2.scope.
Nov 29 00:10:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 00:10:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.433664337 +0000 UTC m=+0.028863854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.546244434 +0000 UTC m=+0.141444041 container init 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.55670805 +0000 UTC m=+0.151907607 container start 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:07 np0005539482 sad_greider[96889]: 167 167
Nov 29 00:10:07 np0005539482 systemd[1]: libpod-9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2.scope: Deactivated successfully.
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.561883435 +0000 UTC m=+0.157083042 container attach 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.563238688 +0000 UTC m=+0.158438245 container died 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:10:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e6d90e32844807701a095ef9f73b57c568453fe7533a44d35d06de6527c84390-merged.mount: Deactivated successfully.
Nov 29 00:10:07 np0005539482 podman[96872]: 2025-11-29 05:10:07.611123913 +0000 UTC m=+0.206323460 container remove 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:07 np0005539482 systemd[1]: libpod-conmon-9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2.scope: Deactivated successfully.
Nov 29 00:10:07 np0005539482 podman[96913]: 2025-11-29 05:10:07.837402366 +0000 UTC m=+0.061556098 container create 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:07 np0005539482 systemd[1]: Started libpod-conmon-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope.
Nov 29 00:10:07 np0005539482 podman[96913]: 2025-11-29 05:10:07.805294875 +0000 UTC m=+0.029448657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:07 np0005539482 podman[96913]: 2025-11-29 05:10:07.934299883 +0000 UTC m=+0.158453625 container init 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:10:07 np0005539482 podman[96913]: 2025-11-29 05:10:07.946345166 +0000 UTC m=+0.170498868 container start 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 00:10:07 np0005539482 podman[96913]: 2025-11-29 05:10:07.949539813 +0000 UTC m=+0.173693555 container attach 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:10:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 00:10:08 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 00:10:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 00:10:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 29 00:10:08 np0005539482 awesome_kirch[96782]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 00:10:08 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 29 00:10:08 np0005539482 systemd[1]: libpod-27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086.scope: Deactivated successfully.
Nov 29 00:10:08 np0005539482 podman[96718]: 2025-11-29 05:10:08.450669012 +0000 UTC m=+1.620659668 container died 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:08 np0005539482 systemd[1]: var-lib-containers-storage-overlay-41f7cefc58315380a9f7a0e002759f3e7cbabf1d77e4e2ad770363f535fc6628-merged.mount: Deactivated successfully.
Nov 29 00:10:08 np0005539482 podman[96718]: 2025-11-29 05:10:08.514917625 +0000 UTC m=+1.684908291 container remove 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:08 np0005539482 systemd[1]: libpod-conmon-27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086.scope: Deactivated successfully.
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]: {
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "osd_id": 0,
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "type": "bluestore"
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:    },
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "osd_id": 1,
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "type": "bluestore"
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:    },
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "osd_id": 2,
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:        "type": "bluestore"
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]:    }
Nov 29 00:10:08 np0005539482 epic_brahmagupta[96929]: }
Nov 29 00:10:09 np0005539482 systemd[1]: libpod-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope: Deactivated successfully.
Nov 29 00:10:09 np0005539482 podman[96913]: 2025-11-29 05:10:09.032136895 +0000 UTC m=+1.256290647 container died 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:09 np0005539482 systemd[1]: libpod-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope: Consumed 1.089s CPU time.
Nov 29 00:10:09 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8-merged.mount: Deactivated successfully.
Nov 29 00:10:09 np0005539482 podman[96913]: 2025-11-29 05:10:09.10386829 +0000 UTC m=+1.328022022 container remove 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:09 np0005539482 systemd[1]: libpod-conmon-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope: Deactivated successfully.
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:09 np0005539482 python3[97113]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:10:10 np0005539482 python3[97184]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393009.3111906-36560-92396070657346/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:10:10 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 00:10:10 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 00:10:10 np0005539482 ceph-mon[75176]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 00:10:10 np0005539482 ceph-mon[75176]: Cluster is now healthy
Nov 29 00:10:10 np0005539482 python3[97286]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:11 np0005539482 python3[97361]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393010.5569205-36574-1604101689895/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4f2b0ec0c0a878c4af2a9002dc161de66516d501 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:10:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:10:11 np0005539482 python3[97411]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:11 np0005539482 podman[97412]: 2025-11-29 05:10:11.754102259 +0000 UTC m=+0.043410397 container create ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:11 np0005539482 systemd[1]: Started libpod-conmon-ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a.scope.
Nov 29 00:10:11 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:11 np0005539482 podman[97412]: 2025-11-29 05:10:11.732192616 +0000 UTC m=+0.021500784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:11 np0005539482 podman[97412]: 2025-11-29 05:10:11.849874068 +0000 UTC m=+0.139182276 container init ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:11 np0005539482 podman[97412]: 2025-11-29 05:10:11.861188443 +0000 UTC m=+0.150496571 container start ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:11 np0005539482 podman[97412]: 2025-11-29 05:10:11.865317484 +0000 UTC m=+0.154625702 container attach ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:10:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 00:10:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 00:10:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 00:10:12 np0005539482 exciting_allen[97428]: 
Nov 29 00:10:12 np0005539482 exciting_allen[97428]: [global]
Nov 29 00:10:12 np0005539482 exciting_allen[97428]: #011fsid = 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 00:10:12 np0005539482 exciting_allen[97428]: #011mon_host = 192.168.122.100
Nov 29 00:10:12 np0005539482 systemd[1]: libpod-ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a.scope: Deactivated successfully.
Nov 29 00:10:12 np0005539482 podman[97412]: 2025-11-29 05:10:12.432289374 +0000 UTC m=+0.721597522 container died ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:12 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 00:10:12 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 00:10:12 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d-merged.mount: Deactivated successfully.
Nov 29 00:10:12 np0005539482 podman[97412]: 2025-11-29 05:10:12.478843856 +0000 UTC m=+0.768151994 container remove ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:12 np0005539482 systemd[1]: libpod-conmon-ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a.scope: Deactivated successfully.
Nov 29 00:10:12 np0005539482 python3[97568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:12 np0005539482 podman[97591]: 2025-11-29 05:10:12.837511919 +0000 UTC m=+0.067940273 container create 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:10:12 np0005539482 systemd[1]: Started libpod-conmon-8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145.scope.
Nov 29 00:10:12 np0005539482 podman[97591]: 2025-11-29 05:10:12.810048922 +0000 UTC m=+0.040477376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:12 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:12 np0005539482 podman[97591]: 2025-11-29 05:10:12.928163574 +0000 UTC m=+0.158591948 container init 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:12 np0005539482 podman[97591]: 2025-11-29 05:10:12.934317235 +0000 UTC m=+0.164745599 container start 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:12 np0005539482 podman[97591]: 2025-11-29 05:10:12.937485281 +0000 UTC m=+0.167913635 container attach 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:13 np0005539482 podman[97682]: 2025-11-29 05:10:13.174209569 +0000 UTC m=+0.046375689 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:13 np0005539482 podman[97682]: 2025-11-29 05:10:13.282645066 +0000 UTC m=+0.154811166 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3496001098' entity='client.admin' 
Nov 29 00:10:13 np0005539482 intelligent_volhard[97632]: set ssl_option
Nov 29 00:10:13 np0005539482 systemd[1]: libpod-8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145.scope: Deactivated successfully.
Nov 29 00:10:13 np0005539482 podman[97591]: 2025-11-29 05:10:13.553172206 +0000 UTC m=+0.783600560 container died 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b-merged.mount: Deactivated successfully.
Nov 29 00:10:13 np0005539482 podman[97591]: 2025-11-29 05:10:13.597591206 +0000 UTC m=+0.828019580 container remove 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:10:13 np0005539482 systemd[1]: libpod-conmon-8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145.scope: Deactivated successfully.
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 975121d3-e9ce-4516-b873-5b48dcdd0d7d does not exist
Nov 29 00:10:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev dfa1c105-c45b-43d3-b665-77b93fbcac6e does not exist
Nov 29 00:10:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev fac06290-7813-42f2-88bc-1cc6f4faef07 does not exist
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:13 np0005539482 python3[97859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:13 np0005539482 podman[97908]: 2025-11-29 05:10:13.951716099 +0000 UTC m=+0.042865343 container create 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:13 np0005539482 systemd[1]: Started libpod-conmon-0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b.scope.
Nov 29 00:10:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 podman[97908]: 2025-11-29 05:10:13.934326587 +0000 UTC m=+0.025475851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:14 np0005539482 podman[97908]: 2025-11-29 05:10:14.040737875 +0000 UTC m=+0.131887169 container init 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:10:14 np0005539482 podman[97908]: 2025-11-29 05:10:14.048739789 +0000 UTC m=+0.139889033 container start 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:14 np0005539482 podman[97908]: 2025-11-29 05:10:14.052324466 +0000 UTC m=+0.143473720 container attach 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.465963686 +0000 UTC m=+0.059326293 container create 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:14 np0005539482 systemd[1]: Started libpod-conmon-118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c.scope.
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.435063095 +0000 UTC m=+0.028425732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/3496001098' entity='client.admin' 
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.553414444 +0000 UTC m=+0.146777081 container init 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.565860707 +0000 UTC m=+0.159223284 container start 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:14 np0005539482 epic_mestorf[98053]: 167 167
Nov 29 00:10:14 np0005539482 systemd[1]: libpod-118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c.scope: Deactivated successfully.
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.571030702 +0000 UTC m=+0.164393299 container attach 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.571622777 +0000 UTC m=+0.164985344 container died 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:10:14 np0005539482 systemd[1]: var-lib-containers-storage-overlay-36ebc5c31970d9ac6bb3a815bf02e2cecdb48a400fa8a812e298dc428491e5bb-merged.mount: Deactivated successfully.
Nov 29 00:10:14 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 29 00:10:14 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 00:10:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:14 np0005539482 vigilant_goldstine[97968]: Scheduled rgw.rgw update...
Nov 29 00:10:14 np0005539482 podman[98036]: 2025-11-29 05:10:14.608115104 +0000 UTC m=+0.201477681 container remove 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:10:14 np0005539482 systemd[1]: libpod-0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b.scope: Deactivated successfully.
Nov 29 00:10:14 np0005539482 podman[97908]: 2025-11-29 05:10:14.623420987 +0000 UTC m=+0.714570231 container died 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:14 np0005539482 systemd[1]: libpod-conmon-118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c.scope: Deactivated successfully.
Nov 29 00:10:14 np0005539482 systemd[1]: var-lib-containers-storage-overlay-8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739-merged.mount: Deactivated successfully.
Nov 29 00:10:14 np0005539482 podman[97908]: 2025-11-29 05:10:14.661518183 +0000 UTC m=+0.752667427 container remove 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:10:14 np0005539482 systemd[1]: libpod-conmon-0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b.scope: Deactivated successfully.
Nov 29 00:10:14 np0005539482 podman[98092]: 2025-11-29 05:10:14.747556256 +0000 UTC m=+0.033954337 container create 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:14 np0005539482 systemd[1]: Started libpod-conmon-8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e.scope.
Nov 29 00:10:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:14 np0005539482 podman[98092]: 2025-11-29 05:10:14.815371165 +0000 UTC m=+0.101769246 container init 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:10:14 np0005539482 podman[98092]: 2025-11-29 05:10:14.824485196 +0000 UTC m=+0.110883277 container start 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:14 np0005539482 podman[98092]: 2025-11-29 05:10:14.827996012 +0000 UTC m=+0.114394123 container attach 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:14 np0005539482 podman[98092]: 2025-11-29 05:10:14.732973751 +0000 UTC m=+0.019371842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:15 np0005539482 ceph-mon[75176]: Saving service rgw.rgw spec with placement compute-0
Nov 29 00:10:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:15 np0005539482 python3[98196]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:10:15 np0005539482 vibrant_wiles[98108]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:10:15 np0005539482 vibrant_wiles[98108]: --> relative data size: 1.0
Nov 29 00:10:15 np0005539482 vibrant_wiles[98108]: --> All data devices are unavailable
Nov 29 00:10:15 np0005539482 systemd[1]: libpod-8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e.scope: Deactivated successfully.
Nov 29 00:10:15 np0005539482 podman[98092]: 2025-11-29 05:10:15.857163314 +0000 UTC m=+1.143561405 container died 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:10:15 np0005539482 systemd[1]: var-lib-containers-storage-overlay-93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508-merged.mount: Deactivated successfully.
Nov 29 00:10:15 np0005539482 podman[98092]: 2025-11-29 05:10:15.912959801 +0000 UTC m=+1.199357902 container remove 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:15 np0005539482 systemd[1]: libpod-conmon-8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e.scope: Deactivated successfully.
Nov 29 00:10:15 np0005539482 python3[98283]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393015.3872905-36615-271709434322707/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:10:16 np0005539482 python3[98470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.442700085 +0000 UTC m=+0.029650612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.692992764 +0000 UTC m=+0.279943241 container create 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:16 np0005539482 podman[98500]: 2025-11-29 05:10:16.727656437 +0000 UTC m=+0.266153475 container create aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:16 np0005539482 systemd[1]: Started libpod-conmon-3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e.scope.
Nov 29 00:10:16 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:16 np0005539482 systemd[1]: Started libpod-conmon-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope.
Nov 29 00:10:16 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.779927958 +0000 UTC m=+0.366878475 container init 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:10:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.796167253 +0000 UTC m=+0.383117690 container start 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:16 np0005539482 podman[98500]: 2025-11-29 05:10:16.797995347 +0000 UTC m=+0.336492405 container init aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.803178463 +0000 UTC m=+0.390128930 container attach 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:10:16 np0005539482 festive_hawking[98515]: 167 167
Nov 29 00:10:16 np0005539482 systemd[1]: libpod-3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e.scope: Deactivated successfully.
Nov 29 00:10:16 np0005539482 podman[98500]: 2025-11-29 05:10:16.71136713 +0000 UTC m=+0.249864198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.807700773 +0000 UTC m=+0.394651210 container died 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:16 np0005539482 podman[98500]: 2025-11-29 05:10:16.808955124 +0000 UTC m=+0.347452162 container start aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:10:16 np0005539482 podman[98500]: 2025-11-29 05:10:16.820684439 +0000 UTC m=+0.359181487 container attach aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:10:16 np0005539482 systemd[1]: var-lib-containers-storage-overlay-66e62aac7a47d5b6e6f2b715249ab0060d6cab42832b261abd99f6295e77ca0c-merged.mount: Deactivated successfully.
Nov 29 00:10:16 np0005539482 podman[98485]: 2025-11-29 05:10:16.848037634 +0000 UTC m=+0.434988071 container remove 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:10:16 np0005539482 systemd[1]: libpod-conmon-3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e.scope: Deactivated successfully.
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.029955039 +0000 UTC m=+0.054565448 container create f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:10:17 np0005539482 systemd[1]: Started libpod-conmon-f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91.scope.
Nov 29 00:10:17 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.007953654 +0000 UTC m=+0.032564103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.117945009 +0000 UTC m=+0.142555468 container init f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.12582737 +0000 UTC m=+0.150437769 container start f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.129574732 +0000 UTC m=+0.154185211 container attach f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:10:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 00:10:17 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0[75172]: 2025-11-29T05:10:17.380+0000 7fad21b30640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e2 new map
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T05:10:17.381210+0000#012modified#0112025-11-29T05:10:17.381255+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 29 00:10:17 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 00:10:17 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 00:10:17 np0005539482 systemd[1]: libpod-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope: Deactivated successfully.
Nov 29 00:10:17 np0005539482 conmon[98520]: conmon aed017992dd71073d5de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope/container/memory.events
Nov 29 00:10:17 np0005539482 podman[98500]: 2025-11-29 05:10:17.435187545 +0000 UTC m=+0.973684583 container died aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:10:17 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0-merged.mount: Deactivated successfully.
Nov 29 00:10:17 np0005539482 podman[98500]: 2025-11-29 05:10:17.483796987 +0000 UTC m=+1.022294035 container remove aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:10:17 np0005539482 systemd[1]: libpod-conmon-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope: Deactivated successfully.
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 00:10:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:17 np0005539482 python3[98624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]: {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:    "0": [
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:        {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "devices": [
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "/dev/loop3"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            ],
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_name": "ceph_lv0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_size": "21470642176",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "name": "ceph_lv0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "tags": {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.crush_device_class": "",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.encrypted": "0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osd_id": "0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.type": "block",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.vdo": "0"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            },
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "type": "block",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "vg_name": "ceph_vg0"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:        }
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:    ],
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:    "1": [
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:        {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "devices": [
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "/dev/loop4"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            ],
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_name": "ceph_lv1",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_size": "21470642176",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "name": "ceph_lv1",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "tags": {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.crush_device_class": "",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.encrypted": "0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osd_id": "1",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.type": "block",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.vdo": "0"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            },
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "type": "block",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "vg_name": "ceph_vg1"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:        }
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:    ],
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:    "2": [
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:        {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "devices": [
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "/dev/loop5"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            ],
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_name": "ceph_lv2",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_size": "21470642176",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "name": "ceph_lv2",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "tags": {
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.crush_device_class": "",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.encrypted": "0",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osd_id": "2",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.type": "block",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:                "ceph.vdo": "0"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            },
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "type": "block",
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:            "vg_name": "ceph_vg2"
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:        }
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]:    ]
Nov 29 00:10:17 np0005539482 boring_rhodes[98560]: }
Nov 29 00:10:17 np0005539482 systemd[1]: libpod-f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91.scope: Deactivated successfully.
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.911169882 +0000 UTC m=+0.935780311 container died f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:17 np0005539482 podman[98629]: 2025-11-29 05:10:17.943625231 +0000 UTC m=+0.064504570 container create 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:10:17 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524-merged.mount: Deactivated successfully.
Nov 29 00:10:17 np0005539482 podman[98543]: 2025-11-29 05:10:17.987782555 +0000 UTC m=+1.012392954 container remove f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:10:17 np0005539482 podman[98629]: 2025-11-29 05:10:17.905887363 +0000 UTC m=+0.026766762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:17 np0005539482 systemd[1]: Started libpod-conmon-803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba.scope.
Nov 29 00:10:18 np0005539482 systemd[1]: libpod-conmon-f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91.scope: Deactivated successfully.
Nov 29 00:10:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:18 np0005539482 podman[98629]: 2025-11-29 05:10:18.050756017 +0000 UTC m=+0.171635346 container init 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 00:10:18 np0005539482 podman[98629]: 2025-11-29 05:10:18.060732109 +0000 UTC m=+0.181611438 container start 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:18 np0005539482 podman[98629]: 2025-11-29 05:10:18.064186953 +0000 UTC m=+0.185066272 container attach 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:10:18 np0005539482 ceph-mgr[75473]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 00:10:18 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 00:10:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 00:10:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:18 np0005539482 youthful_hopper[98657]: Scheduled mds.cephfs update...
Nov 29 00:10:18 np0005539482 systemd[1]: libpod-803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba.scope: Deactivated successfully.
Nov 29 00:10:18 np0005539482 podman[98629]: 2025-11-29 05:10:18.670624743 +0000 UTC m=+0.791504052 container died 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:10:18 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc-merged.mount: Deactivated successfully.
Nov 29 00:10:18 np0005539482 podman[98629]: 2025-11-29 05:10:18.712393579 +0000 UTC m=+0.833272898 container remove 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:18 np0005539482 ceph-mon[75176]: Saving service mds.cephfs spec with placement compute-0
Nov 29 00:10:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.736632009 +0000 UTC m=+0.055293166 container create d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:18 np0005539482 systemd[1]: libpod-conmon-803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba.scope: Deactivated successfully.
Nov 29 00:10:18 np0005539482 systemd[1]: Started libpod-conmon-d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4.scope.
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.709683563 +0000 UTC m=+0.028344730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.824909636 +0000 UTC m=+0.143570813 container init d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.836633631 +0000 UTC m=+0.155294778 container start d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.840647429 +0000 UTC m=+0.159308606 container attach d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:10:18 np0005539482 stoic_haibt[98851]: 167 167
Nov 29 00:10:18 np0005539482 systemd[1]: libpod-d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4.scope: Deactivated successfully.
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.842120674 +0000 UTC m=+0.160781831 container died d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:10:18 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5f7ecfe192d3f691cfdf0e1a6e7d1244f16b7499cb00dac36d5f0013aad570f8-merged.mount: Deactivated successfully.
Nov 29 00:10:18 np0005539482 podman[98823]: 2025-11-29 05:10:18.887747965 +0000 UTC m=+0.206409142 container remove d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:18 np0005539482 systemd[1]: libpod-conmon-d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4.scope: Deactivated successfully.
Nov 29 00:10:19 np0005539482 podman[98873]: 2025-11-29 05:10:19.079544559 +0000 UTC m=+0.071665684 container create 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 00:10:19 np0005539482 systemd[1]: Started libpod-conmon-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope.
Nov 29 00:10:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:19 np0005539482 podman[98873]: 2025-11-29 05:10:19.053231559 +0000 UTC m=+0.045352754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:19 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:19 np0005539482 podman[98873]: 2025-11-29 05:10:19.207283786 +0000 UTC m=+0.199404921 container init 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:19 np0005539482 podman[98873]: 2025-11-29 05:10:19.219971395 +0000 UTC m=+0.212092520 container start 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:10:19 np0005539482 podman[98873]: 2025-11-29 05:10:19.223841699 +0000 UTC m=+0.215962854 container attach 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:19 np0005539482 python3[98972]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:10:19 np0005539482 ceph-mon[75176]: Saving service mds.cephfs spec with placement compute-0
Nov 29 00:10:19 np0005539482 python3[99045]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393019.1871386-36645-146956016647071/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=1cc9e4eb20e7af3f1c9d65ee54a3a3ef5b88c5e3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]: {
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "osd_id": 0,
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "type": "bluestore"
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:    },
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "osd_id": 1,
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "type": "bluestore"
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:    },
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "osd_id": 2,
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:        "type": "bluestore"
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]:    }
Nov 29 00:10:20 np0005539482 stoic_proskuriakova[98893]: }
Nov 29 00:10:20 np0005539482 systemd[1]: libpod-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope: Deactivated successfully.
Nov 29 00:10:20 np0005539482 systemd[1]: libpod-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope: Consumed 1.063s CPU time.
Nov 29 00:10:20 np0005539482 podman[98873]: 2025-11-29 05:10:20.270759942 +0000 UTC m=+1.262881057 container died 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:10:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay-080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b-merged.mount: Deactivated successfully.
Nov 29 00:10:20 np0005539482 podman[98873]: 2025-11-29 05:10:20.326324273 +0000 UTC m=+1.318445388 container remove 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:10:20 np0005539482 systemd[1]: libpod-conmon-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope: Deactivated successfully.
Nov 29 00:10:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:20 np0005539482 python3[99158]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:20 np0005539482 podman[99234]: 2025-11-29 05:10:20.616410989 +0000 UTC m=+0.039064282 container create 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:20 np0005539482 systemd[1]: Started libpod-conmon-039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2.scope.
Nov 29 00:10:20 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afebc566f55ff033696361e26b4ab0d46a52091949440e52de19f4ebe38d1da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afebc566f55ff033696361e26b4ab0d46a52091949440e52de19f4ebe38d1da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:20 np0005539482 podman[99234]: 2025-11-29 05:10:20.690072251 +0000 UTC m=+0.112725584 container init 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 00:10:20 np0005539482 podman[99234]: 2025-11-29 05:10:20.599081688 +0000 UTC m=+0.021735001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:20 np0005539482 podman[99234]: 2025-11-29 05:10:20.698389553 +0000 UTC m=+0.121042866 container start 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:20 np0005539482 podman[99234]: 2025-11-29 05:10:20.702677537 +0000 UTC m=+0.125330840 container attach 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:20 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:20 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:21 np0005539482 podman[99395]: 2025-11-29 05:10:21.205032676 +0000 UTC m=+0.071360767 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 00:10:21 np0005539482 systemd[1]: libpod-039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2.scope: Deactivated successfully.
Nov 29 00:10:21 np0005539482 podman[99234]: 2025-11-29 05:10:21.31049798 +0000 UTC m=+0.733151273 container died 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:21 np0005539482 podman[99395]: 2025-11-29 05:10:21.323755873 +0000 UTC m=+0.190083954 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:10:21 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4afebc566f55ff033696361e26b4ab0d46a52091949440e52de19f4ebe38d1da-merged.mount: Deactivated successfully.
Nov 29 00:10:21 np0005539482 podman[99234]: 2025-11-29 05:10:21.363726115 +0000 UTC m=+0.786379408 container remove 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:10:21 np0005539482 systemd[1]: libpod-conmon-039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2.scope: Deactivated successfully.
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:21 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ccda369d-5c35-4153-a56b-088eaca9b871 does not exist
Nov 29 00:10:21 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 45cbdfcd-5e73-4a47-9ce2-8b5b951ba83f does not exist
Nov 29 00:10:21 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6a7dc500-a9c3-4c1f-b91f-1ab31312f442 does not exist
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:22 np0005539482 python3[99628]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:22 np0005539482 podman[99656]: 2025-11-29 05:10:22.258734713 +0000 UTC m=+0.043923079 container create 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:22 np0005539482 systemd[1]: Started libpod-conmon-07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc.scope.
Nov 29 00:10:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35affc3ccae3c6c331213770449e3068531e63840c368792f8d662ccaa131e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35affc3ccae3c6c331213770449e3068531e63840c368792f8d662ccaa131e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:22 np0005539482 podman[99656]: 2025-11-29 05:10:22.239684061 +0000 UTC m=+0.024872467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:22 np0005539482 podman[99656]: 2025-11-29 05:10:22.342665735 +0000 UTC m=+0.127854131 container init 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:22 np0005539482 podman[99656]: 2025-11-29 05:10:22.350105055 +0000 UTC m=+0.135293431 container start 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:22 np0005539482 podman[99656]: 2025-11-29 05:10:22.363183034 +0000 UTC m=+0.148371430 container attach 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:22 np0005539482 podman[99715]: 2025-11-29 05:10:22.496615809 +0000 UTC m=+0.041306785 container create f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 00:10:22 np0005539482 systemd[1]: Started libpod-conmon-f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e.scope.
Nov 29 00:10:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:22 np0005539482 podman[99715]: 2025-11-29 05:10:22.479087423 +0000 UTC m=+0.023778379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878632264' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 00:10:23 np0005539482 stupefied_thompson[99682]: 
Nov 29 00:10:23 np0005539482 stupefied_thompson[99682]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":149,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":29,"num_osds":3,"num_up_osds":3,"osd_up_since":1764392994,"num_in_osds":3,"osd_in_since":1764392965,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83767296,"bytes_avail":64328159232,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T05:09:43.260960+0000","services":{}},"progress_events":{}}
Nov 29 00:10:23 np0005539482 podman[99715]: 2025-11-29 05:10:23.26097519 +0000 UTC m=+0.805666126 container init f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:23 np0005539482 podman[99715]: 2025-11-29 05:10:23.267028517 +0000 UTC m=+0.811719453 container start f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:10:23 np0005539482 bold_lovelace[99731]: 167 167
Nov 29 00:10:23 np0005539482 systemd[1]: libpod-f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e.scope: Deactivated successfully.
Nov 29 00:10:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:23 np0005539482 podman[99715]: 2025-11-29 05:10:23.276485018 +0000 UTC m=+0.821175974 container attach f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:10:23 np0005539482 podman[99715]: 2025-11-29 05:10:23.276813975 +0000 UTC m=+0.821504911 container died f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:10:23 np0005539482 systemd[1]: libpod-07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc.scope: Deactivated successfully.
Nov 29 00:10:23 np0005539482 podman[99656]: 2025-11-29 05:10:23.285935898 +0000 UTC m=+1.071124264 container died 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:23 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4f650ffe984d44ec4ef644c2787a44e9ee2cc13480aba9011f54331be3322581-merged.mount: Deactivated successfully.
Nov 29 00:10:23 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a35affc3ccae3c6c331213770449e3068531e63840c368792f8d662ccaa131e1-merged.mount: Deactivated successfully.
Nov 29 00:10:23 np0005539482 podman[99715]: 2025-11-29 05:10:23.322669981 +0000 UTC m=+0.867360917 container remove f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:10:23 np0005539482 systemd[1]: libpod-conmon-f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e.scope: Deactivated successfully.
Nov 29 00:10:23 np0005539482 podman[99656]: 2025-11-29 05:10:23.364744154 +0000 UTC m=+1.149932530 container remove 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:23 np0005539482 systemd[1]: libpod-conmon-07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc.scope: Deactivated successfully.
Nov 29 00:10:23 np0005539482 podman[99787]: 2025-11-29 05:10:23.496435207 +0000 UTC m=+0.062587574 container create 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:23 np0005539482 systemd[1]: Started libpod-conmon-43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe.scope.
Nov 29 00:10:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:23 np0005539482 podman[99787]: 2025-11-29 05:10:23.474516653 +0000 UTC m=+0.040669070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 podman[99787]: 2025-11-29 05:10:23.583663438 +0000 UTC m=+0.149815805 container init 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 00:10:23 np0005539482 podman[99787]: 2025-11-29 05:10:23.596959892 +0000 UTC m=+0.163112269 container start 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:10:23 np0005539482 podman[99787]: 2025-11-29 05:10:23.600416405 +0000 UTC m=+0.166568782 container attach 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:23 np0005539482 python3[99832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:23 np0005539482 podman[99835]: 2025-11-29 05:10:23.850511359 +0000 UTC m=+0.069516032 container create 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:10:23 np0005539482 systemd[1]: Started libpod-conmon-86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad.scope.
Nov 29 00:10:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8a4b601b8d6c194625dcadaa448a824937b576a93e047c64d75d0b18cc189/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8a4b601b8d6c194625dcadaa448a824937b576a93e047c64d75d0b18cc189/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:23 np0005539482 podman[99835]: 2025-11-29 05:10:23.818423548 +0000 UTC m=+0.037428271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:23 np0005539482 podman[99835]: 2025-11-29 05:10:23.918420451 +0000 UTC m=+0.137425114 container init 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:23 np0005539482 podman[99835]: 2025-11-29 05:10:23.930994376 +0000 UTC m=+0.149999009 container start 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:10:23 np0005539482 podman[99835]: 2025-11-29 05:10:23.935017804 +0000 UTC m=+0.154022517 container attach 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:10:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424060983' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:10:24 np0005539482 friendly_golick[99850]: 
Nov 29 00:10:24 np0005539482 friendly_golick[99850]: {"epoch":1,"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","modified":"2025-11-29T05:07:49.180526Z","created":"2025-11-29T05:07:49.180526Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 29 00:10:24 np0005539482 friendly_golick[99850]: dumped monmap epoch 1
Nov 29 00:10:24 np0005539482 systemd[1]: libpod-86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad.scope: Deactivated successfully.
Nov 29 00:10:24 np0005539482 podman[99835]: 2025-11-29 05:10:24.613344383 +0000 UTC m=+0.832349186 container died 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:10:24 np0005539482 eager_thompson[99828]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:10:24 np0005539482 eager_thompson[99828]: --> relative data size: 1.0
Nov 29 00:10:24 np0005539482 eager_thompson[99828]: --> All data devices are unavailable
Nov 29 00:10:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bdf8a4b601b8d6c194625dcadaa448a824937b576a93e047c64d75d0b18cc189-merged.mount: Deactivated successfully.
Nov 29 00:10:24 np0005539482 systemd[1]: libpod-43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe.scope: Deactivated successfully.
Nov 29 00:10:24 np0005539482 podman[99835]: 2025-11-29 05:10:24.665715876 +0000 UTC m=+0.884720509 container remove 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:24 np0005539482 podman[99787]: 2025-11-29 05:10:24.666786092 +0000 UTC m=+1.232938459 container died 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:24 np0005539482 systemd[1]: libpod-conmon-86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad.scope: Deactivated successfully.
Nov 29 00:10:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4-merged.mount: Deactivated successfully.
Nov 29 00:10:24 np0005539482 podman[99787]: 2025-11-29 05:10:24.714275047 +0000 UTC m=+1.280427414 container remove 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:10:24 np0005539482 systemd[1]: libpod-conmon-43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe.scope: Deactivated successfully.
Nov 29 00:10:25 np0005539482 python3[100049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.264987571 +0000 UTC m=+0.032877890 container create 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:10:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:25 np0005539482 systemd[1]: Started libpod-conmon-2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9.scope.
Nov 29 00:10:25 np0005539482 podman[100098]: 2025-11-29 05:10:25.309222318 +0000 UTC m=+0.043425437 container create d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:10:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:25 np0005539482 systemd[1]: Started libpod-conmon-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope.
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.338050538 +0000 UTC m=+0.105940897 container init 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.346966426 +0000 UTC m=+0.114856775 container start 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.250854208 +0000 UTC m=+0.018744547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:25 np0005539482 sharp_boyd[100113]: 167 167
Nov 29 00:10:25 np0005539482 systemd[1]: libpod-2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9.scope: Deactivated successfully.
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.350619654 +0000 UTC m=+0.118509983 container attach 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.353885544 +0000 UTC m=+0.121775873 container died 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6ae5cfde459b6841111cd98615ec76df7fcdb5fdc025ed54f5a25aa5ebe88b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6ae5cfde459b6841111cd98615ec76df7fcdb5fdc025ed54f5a25aa5ebe88b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:25 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9e0782cd66a0fb0b8912eb61acec9cd4090b549be54098dfff2e56e108bf8cf7-merged.mount: Deactivated successfully.
Nov 29 00:10:25 np0005539482 podman[100098]: 2025-11-29 05:10:25.292668185 +0000 UTC m=+0.026871324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:25 np0005539482 podman[100084]: 2025-11-29 05:10:25.398197632 +0000 UTC m=+0.166087951 container remove 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 00:10:25 np0005539482 podman[100098]: 2025-11-29 05:10:25.414761415 +0000 UTC m=+0.148964534 container init d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:25 np0005539482 podman[100098]: 2025-11-29 05:10:25.421921729 +0000 UTC m=+0.156124848 container start d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:25 np0005539482 systemd[1]: libpod-conmon-2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9.scope: Deactivated successfully.
Nov 29 00:10:25 np0005539482 podman[100098]: 2025-11-29 05:10:25.425300571 +0000 UTC m=+0.159503700 container attach d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:25 np0005539482 podman[100142]: 2025-11-29 05:10:25.561250298 +0000 UTC m=+0.047434225 container create 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:25 np0005539482 systemd[1]: Started libpod-conmon-5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95.scope.
Nov 29 00:10:25 np0005539482 podman[100142]: 2025-11-29 05:10:25.540171845 +0000 UTC m=+0.026355772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:25 np0005539482 podman[100142]: 2025-11-29 05:10:25.680414926 +0000 UTC m=+0.166598873 container init 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:10:25 np0005539482 podman[100142]: 2025-11-29 05:10:25.686251788 +0000 UTC m=+0.172435685 container start 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:25 np0005539482 podman[100142]: 2025-11-29 05:10:25.69004741 +0000 UTC m=+0.176231387 container attach 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 00:10:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1191791665' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 00:10:26 np0005539482 fervent_swartz[100118]: [client.openstack]
Nov 29 00:10:26 np0005539482 fervent_swartz[100118]: #011key = AQCLfyppAAAAABAAXOcH7jxI2CDW0wmPcSvJrA==
Nov 29 00:10:26 np0005539482 fervent_swartz[100118]: #011caps mgr = "allow *"
Nov 29 00:10:26 np0005539482 fervent_swartz[100118]: #011caps mon = "profile rbd"
Nov 29 00:10:26 np0005539482 fervent_swartz[100118]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 29 00:10:26 np0005539482 systemd[1]: libpod-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope: Deactivated successfully.
Nov 29 00:10:26 np0005539482 conmon[100118]: conmon d080f1f1b70f2ce814d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope/container/memory.events
Nov 29 00:10:26 np0005539482 podman[100184]: 2025-11-29 05:10:26.075179037 +0000 UTC m=+0.030698067 container died d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:26 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ab6ae5cfde459b6841111cd98615ec76df7fcdb5fdc025ed54f5a25aa5ebe88b-merged.mount: Deactivated successfully.
Nov 29 00:10:26 np0005539482 podman[100184]: 2025-11-29 05:10:26.119729721 +0000 UTC m=+0.075248771 container remove d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:26 np0005539482 systemd[1]: libpod-conmon-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope: Deactivated successfully.
Nov 29 00:10:26 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1191791665' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]: {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:    "0": [
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:        {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "devices": [
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "/dev/loop3"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            ],
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_name": "ceph_lv0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_size": "21470642176",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "name": "ceph_lv0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "tags": {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.crush_device_class": "",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.encrypted": "0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osd_id": "0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.type": "block",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.vdo": "0"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            },
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "type": "block",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "vg_name": "ceph_vg0"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:        }
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:    ],
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:    "1": [
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:        {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "devices": [
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "/dev/loop4"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            ],
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_name": "ceph_lv1",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_size": "21470642176",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "name": "ceph_lv1",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "tags": {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.crush_device_class": "",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.encrypted": "0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osd_id": "1",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.type": "block",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.vdo": "0"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            },
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "type": "block",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "vg_name": "ceph_vg1"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:        }
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:    ],
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:    "2": [
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:        {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "devices": [
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "/dev/loop5"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            ],
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_name": "ceph_lv2",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_size": "21470642176",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "name": "ceph_lv2",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "tags": {
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.crush_device_class": "",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.encrypted": "0",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osd_id": "2",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.type": "block",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:                "ceph.vdo": "0"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            },
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "type": "block",
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:            "vg_name": "ceph_vg2"
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:        }
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]:    ]
Nov 29 00:10:26 np0005539482 sleepy_noether[100158]: }
Nov 29 00:10:26 np0005539482 systemd[1]: libpod-5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95.scope: Deactivated successfully.
Nov 29 00:10:26 np0005539482 podman[100142]: 2025-11-29 05:10:26.479717417 +0000 UTC m=+0.965901304 container died 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:10:26 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3-merged.mount: Deactivated successfully.
Nov 29 00:10:26 np0005539482 podman[100142]: 2025-11-29 05:10:26.538158158 +0000 UTC m=+1.024342045 container remove 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:10:26 np0005539482 systemd[1]: libpod-conmon-5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95.scope: Deactivated successfully.
Nov 29 00:10:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.306352752 +0000 UTC m=+0.058025092 container create 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:27 np0005539482 systemd[1]: Started libpod-conmon-46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c.scope.
Nov 29 00:10:27 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.288801925 +0000 UTC m=+0.040474235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.399922988 +0000 UTC m=+0.151595378 container init 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.407218795 +0000 UTC m=+0.158891135 container start 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.411055979 +0000 UTC m=+0.162728329 container attach 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:27 np0005539482 ecstatic_liskov[100413]: 167 167
Nov 29 00:10:27 np0005539482 systemd[1]: libpod-46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c.scope: Deactivated successfully.
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.412728699 +0000 UTC m=+0.164401109 container died 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay-34ad1d2accd645dbd68f810ba9dc3b4b17729dbac0138d76734a5e001d880387-merged.mount: Deactivated successfully.
Nov 29 00:10:27 np0005539482 podman[100356]: 2025-11-29 05:10:27.46948095 +0000 UTC m=+0.221153260 container remove 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:10:27 np0005539482 systemd[1]: libpod-conmon-46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c.scope: Deactivated successfully.
Nov 29 00:10:27 np0005539482 podman[100495]: 2025-11-29 05:10:27.606579004 +0000 UTC m=+0.038751133 container create afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:27 np0005539482 systemd[1]: Started libpod-conmon-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope.
Nov 29 00:10:27 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:27 np0005539482 podman[100495]: 2025-11-29 05:10:27.587875429 +0000 UTC m=+0.020047568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:27 np0005539482 podman[100495]: 2025-11-29 05:10:27.693841256 +0000 UTC m=+0.126013395 container init afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:27 np0005539482 podman[100495]: 2025-11-29 05:10:27.701479502 +0000 UTC m=+0.133651621 container start afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:10:27 np0005539482 podman[100495]: 2025-11-29 05:10:27.704235129 +0000 UTC m=+0.136407248 container attach afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:27 np0005539482 ansible-async_wrapper.py[100568]: Invoked with j606962052817 30 /home/zuul/.ansible/tmp/ansible-tmp-1764393027.2831461-36717-36707949816144/AnsiballZ_command.py _
Nov 29 00:10:27 np0005539482 ansible-async_wrapper.py[100571]: Starting module and watcher
Nov 29 00:10:27 np0005539482 ansible-async_wrapper.py[100571]: Start watching 100572 (30)
Nov 29 00:10:27 np0005539482 ansible-async_wrapper.py[100572]: Start module (100572)
Nov 29 00:10:27 np0005539482 ansible-async_wrapper.py[100568]: Return async_wrapper task started.
Nov 29 00:10:28 np0005539482 python3[100573]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.071556883 +0000 UTC m=+0.052150360 container create 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:28 np0005539482 systemd[1]: Started libpod-conmon-364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f.scope.
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.046205497 +0000 UTC m=+0.026799034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff36e0e9ec2dced2154c9c1b43a27dc3be93cbf4691fc9edd8dee78f632c1d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff36e0e9ec2dced2154c9c1b43a27dc3be93cbf4691fc9edd8dee78f632c1d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.168831639 +0000 UTC m=+0.149425206 container init 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.182731457 +0000 UTC m=+0.163324964 container start 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.188523608 +0000 UTC m=+0.169117125 container attach 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]: {
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "osd_id": 0,
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "type": "bluestore"
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:    },
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "osd_id": 1,
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "type": "bluestore"
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:    },
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "osd_id": 2,
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:        "type": "bluestore"
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]:    }
Nov 29 00:10:28 np0005539482 kind_rosalind[100536]: }
Nov 29 00:10:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 00:10:28 np0005539482 interesting_satoshi[100589]: 
Nov 29 00:10:28 np0005539482 interesting_satoshi[100589]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 00:10:28 np0005539482 systemd[1]: libpod-364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f.scope: Deactivated successfully.
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.727400975 +0000 UTC m=+0.707994492 container died 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:28 np0005539482 systemd[1]: libpod-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope: Deactivated successfully.
Nov 29 00:10:28 np0005539482 systemd[1]: libpod-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope: Consumed 1.052s CPU time.
Nov 29 00:10:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-dff36e0e9ec2dced2154c9c1b43a27dc3be93cbf4691fc9edd8dee78f632c1d1-merged.mount: Deactivated successfully.
Nov 29 00:10:28 np0005539482 podman[100574]: 2025-11-29 05:10:28.783974351 +0000 UTC m=+0.764567828 container remove 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:28 np0005539482 systemd[1]: libpod-conmon-364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f.scope: Deactivated successfully.
Nov 29 00:10:28 np0005539482 podman[100649]: 2025-11-29 05:10:28.808891345 +0000 UTC m=+0.041104039 container died afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:28 np0005539482 ansible-async_wrapper.py[100572]: Module complete (100572)
Nov 29 00:10:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818-merged.mount: Deactivated successfully.
Nov 29 00:10:28 np0005539482 podman[100649]: 2025-11-29 05:10:28.876826503 +0000 UTC m=+0.109039117 container remove afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:28 np0005539482 systemd[1]: libpod-conmon-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope: Deactivated successfully.
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:28 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 124d89dd-391b-4b34-9945-16d1dcae5fd1 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:28 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dwtrck on compute-0
Nov 29 00:10:28 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dwtrck on compute-0
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:29 np0005539482 python3[100764]: ansible-ansible.legacy.async_status Invoked with jid=j606962052817.100568 mode=status _async_dir=/root/.ansible_async
Nov 29 00:10:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:29 np0005539482 python3[100876]: ansible-ansible.legacy.async_status Invoked with jid=j606962052817.100568 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.64871347 +0000 UTC m=+0.054875869 container create a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:10:29 np0005539482 systemd[1]: Started libpod-conmon-a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e.scope.
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.618941436 +0000 UTC m=+0.025103875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:29 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.74254156 +0000 UTC m=+0.148704039 container init a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.756774207 +0000 UTC m=+0.162936566 container start a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.760611468 +0000 UTC m=+0.166773917 container attach a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 29 00:10:29 np0005539482 nervous_meitner[100921]: 167 167
Nov 29 00:10:29 np0005539482 systemd[1]: libpod-a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e.scope: Deactivated successfully.
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.764638604 +0000 UTC m=+0.170801033 container died a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:10:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-46677b3988112b0e01d31b364b7e9a83f10c8413d4ab13c6412789d90aaa225a-merged.mount: Deactivated successfully.
Nov 29 00:10:29 np0005539482 podman[100905]: 2025-11-29 05:10:29.817452534 +0000 UTC m=+0.223614923 container remove a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:29 np0005539482 systemd[1]: libpod-conmon-a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e.scope: Deactivated successfully.
Nov 29 00:10:29 np0005539482 systemd[1]: Reloading.
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:29 np0005539482 ceph-mon[75176]: Deploying daemon rgw.rgw.compute-0.dwtrck on compute-0
Nov 29 00:10:29 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:10:29 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:10:30 np0005539482 systemd[1]: Reloading.
Nov 29 00:10:30 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:10:30 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:10:30 np0005539482 python3[101002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:30 np0005539482 podman[101041]: 2025-11-29 05:10:30.384244596 +0000 UTC m=+0.048762264 container create 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:30 np0005539482 podman[101041]: 2025-11-29 05:10:30.36032723 +0000 UTC m=+0.024844928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:30 np0005539482 systemd[1]: Started libpod-conmon-97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929.scope.
Nov 29 00:10:30 np0005539482 systemd[1]: Starting Ceph rgw.rgw.compute-0.dwtrck for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:10:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1908ffe131b7552f48f85ca272914ea130999d9c76e0f899bd4d00f080f8e1a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1908ffe131b7552f48f85ca272914ea130999d9c76e0f899bd4d00f080f8e1a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:30 np0005539482 podman[101041]: 2025-11-29 05:10:30.50312817 +0000 UTC m=+0.167645838 container init 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:10:30 np0005539482 podman[101041]: 2025-11-29 05:10:30.519330524 +0000 UTC m=+0.183848192 container start 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:30 np0005539482 podman[101041]: 2025-11-29 05:10:30.523004441 +0000 UTC m=+0.187522139 container attach 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:10:30 np0005539482 podman[101111]: 2025-11-29 05:10:30.697679464 +0000 UTC m=+0.057958592 container create bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:10:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dwtrck supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:30 np0005539482 podman[101111]: 2025-11-29 05:10:30.755983224 +0000 UTC m=+0.116262382 container init bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:30 np0005539482 podman[101111]: 2025-11-29 05:10:30.760810408 +0000 UTC m=+0.121089536 container start bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:10:30 np0005539482 bash[101111]: bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970
Nov 29 00:10:30 np0005539482 podman[101111]: 2025-11-29 05:10:30.679652987 +0000 UTC m=+0.039932135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:30 np0005539482 systemd[1]: Started Ceph rgw.rgw.compute-0.dwtrck for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:10:30 np0005539482 radosgw[101131]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:10:30 np0005539482 radosgw[101131]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 00:10:30 np0005539482 radosgw[101131]: framework: beast
Nov 29 00:10:30 np0005539482 radosgw[101131]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 00:10:30 np0005539482 radosgw[101131]: init_numa not setting numa affinity
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 124d89dd-391b-4b34-9945-16d1dcae5fd1 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 124d89dd-391b-4b34-9945-16d1dcae5fd1 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev e2fe9c50-bb63-4196-ba25-35b29159b9ea (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.mjtuko on compute-0
Nov 29 00:10:30 np0005539482 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.mjtuko on compute-0
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 00:10:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 00:10:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 00:10:31 np0005539482 determined_nobel[101058]: 
Nov 29 00:10:31 np0005539482 determined_nobel[101058]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 00:10:31 np0005539482 systemd[1]: libpod-97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929.scope: Deactivated successfully.
Nov 29 00:10:31 np0005539482 podman[101041]: 2025-11-29 05:10:31.117786076 +0000 UTC m=+0.782303764 container died 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:10:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1908ffe131b7552f48f85ca272914ea130999d9c76e0f899bd4d00f080f8e1a2-merged.mount: Deactivated successfully.
Nov 29 00:10:31 np0005539482 podman[101041]: 2025-11-29 05:10:31.171572969 +0000 UTC m=+0.836090637 container remove 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:31 np0005539482 systemd[1]: libpod-conmon-97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929.scope: Deactivated successfully.
Nov 29 00:10:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:31 np0005539482 ceph-mgr[75473]: [progress INFO root] Writing back 4 completed events
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.591764563 +0000 UTC m=+0.068880681 container create a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:10:31 np0005539482 systemd[1]: Started libpod-conmon-a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc.scope.
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.566817543 +0000 UTC m=+0.043933661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.687893998 +0000 UTC m=+0.165010076 container init a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.695557929 +0000 UTC m=+0.172674007 container start a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.700184459 +0000 UTC m=+0.177300567 container attach a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:31 np0005539482 laughing_ptolemy[101384]: 167 167
Nov 29 00:10:31 np0005539482 systemd[1]: libpod-a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc.scope: Deactivated successfully.
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.70402784 +0000 UTC m=+0.181143958 container died a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6913ee9b0fbb5408668bce852dfb081e5e3292d8f5361e05f5330e63c04a37c3-merged.mount: Deactivated successfully.
Nov 29 00:10:31 np0005539482 podman[101368]: 2025-11-29 05:10:31.757152607 +0000 UTC m=+0.234268725 container remove a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:31 np0005539482 systemd[1]: libpod-conmon-a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc.scope: Deactivated successfully.
Nov 29 00:10:31 np0005539482 systemd[1]: Reloading.
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 00:10:31 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:10:31 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: Saving service rgw.rgw spec with placement compute-0
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: Deploying daemon mds.cephfs.compute-0.mjtuko on compute-0
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:31 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 00:10:32 np0005539482 systemd[1]: Reloading.
Nov 29 00:10:32 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:10:32 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:10:32 np0005539482 python3[101465]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:32 np0005539482 podman[101504]: 2025-11-29 05:10:32.394373047 +0000 UTC m=+0.063499864 container create 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:10:32 np0005539482 podman[101504]: 2025-11-29 05:10:32.364403908 +0000 UTC m=+0.033530785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:32 np0005539482 systemd[1]: Started libpod-conmon-6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909.scope.
Nov 29 00:10:32 np0005539482 systemd[1]: Starting Ceph mds.cephfs.compute-0.mjtuko for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 00:10:32 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838d98921549cf1341840f9c81150c12264c84cd026ba9271e96f71858be2a6f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838d98921549cf1341840f9c81150c12264c84cd026ba9271e96f71858be2a6f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:32 np0005539482 podman[101504]: 2025-11-29 05:10:32.51702915 +0000 UTC m=+0.186155947 container init 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:32 np0005539482 podman[101504]: 2025-11-29 05:10:32.529381472 +0000 UTC m=+0.198508269 container start 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:10:32 np0005539482 podman[101504]: 2025-11-29 05:10:32.533459518 +0000 UTC m=+0.202586375 container attach 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:32 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:32 np0005539482 podman[101573]: 2025-11-29 05:10:32.699200881 +0000 UTC m=+0.037963519 container create cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.mjtuko supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:32 np0005539482 podman[101573]: 2025-11-29 05:10:32.757369747 +0000 UTC m=+0.096132405 container init cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:10:32 np0005539482 podman[101573]: 2025-11-29 05:10:32.763288488 +0000 UTC m=+0.102051126 container start cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:32 np0005539482 bash[101573]: cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416
Nov 29 00:10:32 np0005539482 podman[101573]: 2025-11-29 05:10:32.681654675 +0000 UTC m=+0.020417333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:32 np0005539482 systemd[1]: Started Ceph mds.cephfs.compute-0.mjtuko for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: main not setting numa affinity
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: pidfile_write: ignore empty --pid-file
Nov 29 00:10:32 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko[101589]: starting mds.cephfs.compute-0.mjtuko at 
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 2 from mon.0
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev e2fe9c50-bb63-4196-ba25-35b29159b9ea (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 00:10:32 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event e2fe9c50-bb63-4196-ba25-35b29159b9ea (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 00:10:32 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ansible-async_wrapper.py[100571]: Done in kid B.
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 new map
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T05:10:17.381210+0000#012modified#0112025-11-29T05:10:17.381255+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.mjtuko{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 3 from mon.0
Nov 29 00:10:32 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Monitors have assigned me to become a standby.
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] up:boot
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] as mds.0
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mjtuko assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.mjtuko"} v 0) v1
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.mjtuko"}]: dispatch
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e4 new map
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T05:10:17.381210+0000#012modified#0112025-11-29T05:10:32.991647+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.mjtuko{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 4 from mon.0
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mjtuko=up:creating}
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x1
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x100
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x600
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x601
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x602
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x603
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x604
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x605
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x606
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x607
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x608
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x609
Nov 29 00:10:33 np0005539482 ceph-mds[101593]: mds.0.4 creating_done
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mjtuko is now active in filesystem cephfs as rank 0
Nov 29 00:10:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 00:10:33 np0005539482 agitated_nobel[101521]: 
Nov 29 00:10:33 np0005539482 agitated_nobel[101521]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 29 00:10:33 np0005539482 systemd[1]: libpod-6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909.scope: Deactivated successfully.
Nov 29 00:10:33 np0005539482 podman[101504]: 2025-11-29 05:10:33.116552238 +0000 UTC m=+0.785679055 container died 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:33 np0005539482 systemd[1]: var-lib-containers-storage-overlay-838d98921549cf1341840f9c81150c12264c84cd026ba9271e96f71858be2a6f-merged.mount: Deactivated successfully.
Nov 29 00:10:33 np0005539482 podman[101504]: 2025-11-29 05:10:33.167020952 +0000 UTC m=+0.836147739 container remove 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:33 np0005539482 systemd[1]: libpod-conmon-6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909.scope: Deactivated successfully.
Nov 29 00:10:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v78: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 00:10:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 00:10:33 np0005539482 podman[101877]: 2025-11-29 05:10:33.915664189 +0000 UTC m=+0.085349241 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: daemon mds.cephfs.compute-0.mjtuko assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: Cluster is now healthy
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: daemon mds.cephfs.compute-0.mjtuko is now active in filesystem cephfs as rank 0
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e5 new map
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T05:10:17.381210+0000#012modified#0112025-11-29T05:10:34.000457+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.mjtuko{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 00:10:34 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 5 from mon.0
Nov 29 00:10:34 np0005539482 ceph-mds[101593]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 00:10:34 np0005539482 ceph-mds[101593]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 00:10:34 np0005539482 ceph-mds[101593]: mds.0.4 recovery_done -- successful recovery!
Nov 29 00:10:34 np0005539482 ceph-mds[101593]: mds.0.4 active_start
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] up:active
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mjtuko=up:active}
Nov 29 00:10:34 np0005539482 podman[101877]: 2025-11-29 05:10:34.057856334 +0000 UTC m=+0.227541416 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:34 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:34 np0005539482 python3[101958]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:34 np0005539482 podman[101984]: 2025-11-29 05:10:34.475029306 +0000 UTC m=+0.060358739 container create bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:10:34 np0005539482 systemd[1]: Started libpod-conmon-bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263.scope.
Nov 29 00:10:34 np0005539482 podman[101984]: 2025-11-29 05:10:34.448462328 +0000 UTC m=+0.033791751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:34 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97a7c6f5a5beea04715e1dd0a774792754e8aec34440a9c1680d4b7738806da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97a7c6f5a5beea04715e1dd0a774792754e8aec34440a9c1680d4b7738806da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:34 np0005539482 podman[101984]: 2025-11-29 05:10:34.575067634 +0000 UTC m=+0.160397107 container init bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:34 np0005539482 podman[101984]: 2025-11-29 05:10:34.584067597 +0000 UTC m=+0.169397000 container start bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:10:34 np0005539482 podman[101984]: 2025-11-29 05:10:34.58756659 +0000 UTC m=+0.172896013 container attach bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 29 00:10:34 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:34 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b70d627d-eff3-440d-826e-45927c56fcd3 does not exist
Nov 29 00:10:34 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a89dac84-5858-4e7b-b8f1-df5ab24619a4 does not exist
Nov 29 00:10:34 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 07731cc7-e030-48e0-96ed-d7fa05d11779 does not exist
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 00:10:35 np0005539482 hopeful_hofstadter[102023]: 
Nov 29 00:10:35 np0005539482 hopeful_hofstadter[102023]: [{"container_id": "8c3d78b49174", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.43%", "created": "2025-11-29T05:09:06.936213Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T05:09:06.990283Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913632Z", "memory_usage": 11597250, "ports": [], "service_name": "crash", "started": "2025-11-29T05:09:06.842550Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@crash.compute-0", "version": "18.2.7"}, {"container_id": "cd3cd449d854", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "9.35%", "created": "2025-11-29T05:10:32.781198Z", "daemon_id": "cephfs.compute-0.mjtuko", "daemon_name": "mds.cephfs.compute-0.mjtuko", "daemon_type": "mds", "events": ["2025-11-29T05:10:32.834335Z daemon:mds.cephfs.compute-0.mjtuko [INFO] \"Deployed mds.cephfs.compute-0.mjtuko on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.914379Z", "memory_usage": 18360565, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-29T05:10:32.685552Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@mds.cephfs.compute-0.mjtuko", "version": "18.2.7"}, {"container_id": "342af346b419", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "29.17%", "created": "2025-11-29T05:07:56.139663Z", "daemon_id": "compute-0.csskcz", "daemon_name": "mgr.compute-0.csskcz", "daemon_type": "mgr", "events": ["2025-11-29T05:09:11.542415Z daemon:mgr.compute-0.csskcz [INFO] \"Reconfigured mgr.compute-0.csskcz on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913497Z", "memory_usage": 548090675, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T05:07:56.051515Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.csskcz", "version": "18.2.7"}, {"container_id": "8221d7b65f9d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.26%", "created": "2025-11-29T05:07:51.261549Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T05:09:10.799324Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913216Z", "memory_request": 2147483648, "memory_usage": 42446356, "ports": [], "service_name": "mon", "started": "2025-11-29T05:07:53.908548Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0", "version": "18.2.7"}, {"container_id": "a8f7d50ad538", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.73%", "created": "2025-11-29T05:09:36.470871Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-29T05:09:36.520527Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913763Z", "memory_request": 4294967296, "memory_usage": 60146319, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T05:09:36.322007Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@osd.0", "version": "18.2.7"}, {"container_id": "82f057625789", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.81%", "created": "2025-11-29T05:09:41.082980Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T05:09:41.168704Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913953Z", "memory_request": 4294967296, "memory_usage": 57378078, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T05:09:40.928970Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@osd.1", "version": "18.2.7"}, {"container_id": "5bc94574df1b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.01%", "created": "2025-11-29T05:09:46.298104Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-29T05:09:46.430954Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.914088Z", "memory_request": 4294967296, "memory_usage": 55993958, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T05:09:46.102905Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@osd.2", "version": "18.2.7"}, {"container_id": "bb930ede36ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.77%", "created": "2025-11-29T05:10:30.786152Z", "daemon_id": "rgw.compute-0.dwtrck", "daemon_name": "rgw.rgw.compute-0.dwtrck", "daemon_type": "rgw", "events": ["2025-11-29T
Nov 29 00:10:35 np0005539482 systemd[1]: libpod-bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263.scope: Deactivated successfully.
Nov 29 00:10:35 np0005539482 podman[101984]: 2025-11-29 05:10:35.123944324 +0000 UTC m=+0.709273717 container died bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:10:35 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c97a7c6f5a5beea04715e1dd0a774792754e8aec34440a9c1680d4b7738806da-merged.mount: Deactivated successfully.
Nov 29 00:10:35 np0005539482 podman[101984]: 2025-11-29 05:10:35.188501561 +0000 UTC m=+0.773830984 container remove bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:10:35 np0005539482 systemd[1]: libpod-conmon-bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263.scope: Deactivated successfully.
Nov 29 00:10:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v81: 9 pgs: 1 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 00:10:35 np0005539482 rsyslogd[1003]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "8c3d78b49174", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.766427978 +0000 UTC m=+0.074175376 container create b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:10:35 np0005539482 systemd[1]: Started libpod-conmon-b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741.scope.
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.734914272 +0000 UTC m=+0.042661710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:35 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.868210236 +0000 UTC m=+0.175957674 container init b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 00:10:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 00:10:35 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.877985148 +0000 UTC m=+0.185732506 container start b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.881355218 +0000 UTC m=+0.189102676 container attach b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:10:35 np0005539482 distracted_moore[102271]: 167 167
Nov 29 00:10:35 np0005539482 systemd[1]: libpod-b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741.scope: Deactivated successfully.
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.88440846 +0000 UTC m=+0.192155828 container died b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 00:10:35 np0005539482 systemd[1]: var-lib-containers-storage-overlay-82edb18f9b8fb982637aaa6322b97b44b0f477cc74fee865e087390c794c7fd7-merged.mount: Deactivated successfully.
Nov 29 00:10:35 np0005539482 podman[102255]: 2025-11-29 05:10:35.924756095 +0000 UTC m=+0.232503483 container remove b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:10:35 np0005539482 systemd[1]: libpod-conmon-b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741.scope: Deactivated successfully.
Nov 29 00:10:36 np0005539482 podman[102305]: 2025-11-29 05:10:36.106572757 +0000 UTC m=+0.051733674 container create 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:10:36 np0005539482 systemd[1]: Started libpod-conmon-79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93.scope.
Nov 29 00:10:36 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 podman[102305]: 2025-11-29 05:10:36.084909065 +0000 UTC m=+0.030069992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:36 np0005539482 podman[102305]: 2025-11-29 05:10:36.184742927 +0000 UTC m=+0.129903874 container init 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:36 np0005539482 podman[102305]: 2025-11-29 05:10:36.192678555 +0000 UTC m=+0.137839482 container start 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:10:36 np0005539482 podman[102305]: 2025-11-29 05:10:36.196423174 +0000 UTC m=+0.141584131 container attach 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:10:36 np0005539482 python3[102333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:36 np0005539482 podman[102345]: 2025-11-29 05:10:36.29172408 +0000 UTC m=+0.048133721 container create 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:10:36 np0005539482 systemd[1]: Started libpod-conmon-6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db.scope.
Nov 29 00:10:36 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b62a457bf4b5f8856b2937906175881f667d785a7ba325f942c29171444720/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b62a457bf4b5f8856b2937906175881f667d785a7ba325f942c29171444720/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:36 np0005539482 podman[102345]: 2025-11-29 05:10:36.272048584 +0000 UTC m=+0.028458255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:36 np0005539482 ceph-mgr[75473]: [progress INFO root] Writing back 5 completed events
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 00:10:36 np0005539482 podman[102345]: 2025-11-29 05:10:36.371726613 +0000 UTC m=+0.128136274 container init 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:36 np0005539482 podman[102345]: 2025-11-29 05:10:36.380707945 +0000 UTC m=+0.137117586 container start 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:36 np0005539482 podman[102345]: 2025-11-29 05:10:36.384352812 +0000 UTC m=+0.140762443 container attach 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 29 00:10:36 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 29 00:10:36 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807082650' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 00:10:37 np0005539482 awesome_jones[102361]: 
Nov 29 00:10:37 np0005539482 awesome_jones[102361]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":162,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1764392994,"num_in_osds":3,"osd_in_since":1764392965,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":27,"data_bytes":463028,"bytes_used":83898368,"bytes_avail":64328028160,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"read_bytes_sec":1023,"write_bytes_sec":4606,"read_op_per_sec":0,"write_op_per_sec":11},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.mjtuko","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T05:09:43.260960+0000","services":{}},"progress_events":{}}
Nov 29 00:10:37 np0005539482 systemd[1]: libpod-6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db.scope: Deactivated successfully.
Nov 29 00:10:37 np0005539482 podman[102345]: 2025-11-29 05:10:37.039767413 +0000 UTC m=+0.796177044 container died 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a5b62a457bf4b5f8856b2937906175881f667d785a7ba325f942c29171444720-merged.mount: Deactivated successfully.
Nov 29 00:10:37 np0005539482 podman[102345]: 2025-11-29 05:10:37.07770493 +0000 UTC m=+0.834114571 container remove 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:37 np0005539482 systemd[1]: libpod-conmon-6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db.scope: Deactivated successfully.
Nov 29 00:10:37 np0005539482 thirsty_almeida[102340]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:10:37 np0005539482 thirsty_almeida[102340]: --> relative data size: 1.0
Nov 29 00:10:37 np0005539482 thirsty_almeida[102340]: --> All data devices are unavailable
Nov 29 00:10:37 np0005539482 systemd[1]: libpod-79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93.scope: Deactivated successfully.
Nov 29 00:10:37 np0005539482 podman[102305]: 2025-11-29 05:10:37.228886728 +0000 UTC m=+1.174047635 container died 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:10:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b-merged.mount: Deactivated successfully.
Nov 29 00:10:37 np0005539482 podman[102305]: 2025-11-29 05:10:37.275919951 +0000 UTC m=+1.221080858 container remove 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v84: 10 pgs: 2 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 00:10:37 np0005539482 systemd[1]: libpod-conmon-79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93.scope: Deactivated successfully.
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.811022445 +0000 UTC m=+0.039235400 container create 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:37 np0005539482 systemd[1]: Started libpod-conmon-51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19.scope.
Nov 29 00:10:37 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.88096313 +0000 UTC m=+0.109176075 container init 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.886945321 +0000 UTC m=+0.115158236 container start 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.793126521 +0000 UTC m=+0.021339466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.890849764 +0000 UTC m=+0.119062689 container attach 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:10:37 np0005539482 systemd[1]: libpod-51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19.scope: Deactivated successfully.
Nov 29 00:10:37 np0005539482 festive_jemison[102604]: 167 167
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.89239425 +0000 UTC m=+0.120607255 container died 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 00:10:37 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 00:10:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 00:10:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e89c830c1aba67b304c22968853a9bc41c8e8b33ff2bc6113b4194b9bcbd441a-merged.mount: Deactivated successfully.
Nov 29 00:10:37 np0005539482 podman[102589]: 2025-11-29 05:10:37.937579189 +0000 UTC m=+0.165792134 container remove 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:10:37 np0005539482 systemd[1]: libpod-conmon-51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19.scope: Deactivated successfully.
Nov 29 00:10:38 np0005539482 podman[102655]: 2025-11-29 05:10:38.096685515 +0000 UTC m=+0.043107521 container create 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:38 np0005539482 systemd[1]: Started libpod-conmon-75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2.scope.
Nov 29 00:10:38 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:38 np0005539482 podman[102655]: 2025-11-29 05:10:38.07662326 +0000 UTC m=+0.023045326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:38 np0005539482 podman[102655]: 2025-11-29 05:10:38.186114471 +0000 UTC m=+0.132536587 container init 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 00:10:38 np0005539482 python3[102650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:38 np0005539482 podman[102655]: 2025-11-29 05:10:38.198247518 +0000 UTC m=+0.144669574 container start 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:38 np0005539482 podman[102655]: 2025-11-29 05:10:38.202677294 +0000 UTC m=+0.149099300 container attach 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.272521916 +0000 UTC m=+0.052168956 container create 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:38 np0005539482 systemd[1]: Started libpod-conmon-5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf.scope.
Nov 29 00:10:38 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa094add065a17083e90a81c0edc43e2ae53e1c90c9f41f6e9f2d6c273e9f00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa094add065a17083e90a81c0edc43e2ae53e1c90c9f41f6e9f2d6c273e9f00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.252656406 +0000 UTC m=+0.032303436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.356284548 +0000 UTC m=+0.135931568 container init 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.361817059 +0000 UTC m=+0.141464109 container start 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.36567158 +0000 UTC m=+0.145318610 container attach 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 00:10:38 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778170416' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 00:10:38 np0005539482 crazy_kalam[102692]: 
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 00:10:38 np0005539482 systemd[1]: libpod-5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf.scope: Deactivated successfully.
Nov 29 00:10:38 np0005539482 crazy_kalam[102692]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dwtrck","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.897339773 +0000 UTC m=+0.676986833 container died 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 00:10:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 00:10:38 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:38 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1aa094add065a17083e90a81c0edc43e2ae53e1c90c9f41f6e9f2d6c273e9f00-merged.mount: Deactivated successfully.
Nov 29 00:10:38 np0005539482 podman[102677]: 2025-11-29 05:10:38.95216108 +0000 UTC m=+0.731808080 container remove 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:10:38 np0005539482 zen_poincare[102672]: {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:    "0": [
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:        {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "devices": [
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "/dev/loop3"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            ],
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_name": "ceph_lv0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_size": "21470642176",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "name": "ceph_lv0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "tags": {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.crush_device_class": "",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.encrypted": "0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osd_id": "0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.type": "block",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.vdo": "0"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            },
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "type": "block",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "vg_name": "ceph_vg0"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:        }
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:    ],
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:    "1": [
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:        {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "devices": [
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "/dev/loop4"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            ],
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_name": "ceph_lv1",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_size": "21470642176",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "name": "ceph_lv1",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "tags": {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.crush_device_class": "",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.encrypted": "0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osd_id": "1",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.type": "block",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.vdo": "0"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            },
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "type": "block",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "vg_name": "ceph_vg1"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:        }
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:    ],
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:    "2": [
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:        {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "devices": [
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "/dev/loop5"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            ],
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_name": "ceph_lv2",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_size": "21470642176",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "name": "ceph_lv2",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "tags": {
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.crush_device_class": "",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.encrypted": "0",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osd_id": "2",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.type": "block",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:                "ceph.vdo": "0"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            },
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "type": "block",
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:            "vg_name": "ceph_vg2"
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:        }
Nov 29 00:10:38 np0005539482 zen_poincare[102672]:    ]
Nov 29 00:10:38 np0005539482 zen_poincare[102672]: }
Nov 29 00:10:38 np0005539482 systemd[1]: libpod-conmon-5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf.scope: Deactivated successfully.
Nov 29 00:10:39 np0005539482 systemd[1]: libpod-75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2.scope: Deactivated successfully.
Nov 29 00:10:39 np0005539482 podman[102655]: 2025-11-29 05:10:39.001550899 +0000 UTC m=+0.947972935 container died 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:39 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2-merged.mount: Deactivated successfully.
Nov 29 00:10:39 np0005539482 podman[102655]: 2025-11-29 05:10:39.052952695 +0000 UTC m=+0.999374701 container remove 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:39 np0005539482 systemd[1]: libpod-conmon-75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2.scope: Deactivated successfully.
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.787100629 +0000 UTC m=+0.065701386 container create 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:39 np0005539482 systemd[1]: Started libpod-conmon-8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db.scope.
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.76095377 +0000 UTC m=+0.039554587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:39 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.885043367 +0000 UTC m=+0.163644184 container init 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.892447642 +0000 UTC m=+0.171048399 container start 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.896779454 +0000 UTC m=+0.175380211 container attach 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:10:39 np0005539482 infallible_chatelet[102900]: 167 167
Nov 29 00:10:39 np0005539482 systemd[1]: libpod-8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db.scope: Deactivated successfully.
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.898153977 +0000 UTC m=+0.176754744 container died 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 00:10:39 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 00:10:39 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7de168f7b92ba7dc707b051e8ab279e3ca2c3fd7cbe43018e920ccfcaa01744e-merged.mount: Deactivated successfully.
Nov 29 00:10:39 np0005539482 podman[102884]: 2025-11-29 05:10:39.955509984 +0000 UTC m=+0.234110711 container remove 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:10:39 np0005539482 systemd[1]: libpod-conmon-8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db.scope: Deactivated successfully.
Nov 29 00:10:40 np0005539482 radosgw[101131]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 00:10:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck[101127]: 2025-11-29T05:10:40.057+0000 7f8fe2607940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 00:10:40 np0005539482 radosgw[101131]: framework: beast
Nov 29 00:10:40 np0005539482 radosgw[101131]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 00:10:40 np0005539482 radosgw[101131]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 00:10:40 np0005539482 python3[102941]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:40 np0005539482 radosgw[101131]: starting handler: beast
Nov 29 00:10:40 np0005539482 radosgw[101131]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 00:10:40 np0005539482 podman[102976]: 2025-11-29 05:10:40.150334565 +0000 UTC m=+0.047894165 container create f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:10:40 np0005539482 radosgw[101131]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dwtrck,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f35e7436-e8c2-46d1-be58-9961c1fdcc6c,zone_name=default,zonegroup_id=467ce4d9-6945-496b-b23e-b9cf98f6161a,zonegroup_name=default}
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.181081793 +0000 UTC m=+0.058890265 container create fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:10:40 np0005539482 systemd[1]: Started libpod-conmon-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope.
Nov 29 00:10:40 np0005539482 systemd[1]: Started libpod-conmon-fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292.scope.
Nov 29 00:10:40 np0005539482 podman[102976]: 2025-11-29 05:10:40.133437396 +0000 UTC m=+0.030997026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a065102c01d931ae2635bb9c0583c313eba74241ca3e497025e7cd5dcf6b60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a065102c01d931ae2635bb9c0583c313eba74241ca3e497025e7cd5dcf6b60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.16024079 +0000 UTC m=+0.038049292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:40 np0005539482 podman[102976]: 2025-11-29 05:10:40.263135364 +0000 UTC m=+0.160694984 container init f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:10:40 np0005539482 podman[102976]: 2025-11-29 05:10:40.271732498 +0000 UTC m=+0.169292098 container start f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.275991899 +0000 UTC m=+0.153800441 container init fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.28406906 +0000 UTC m=+0.161877502 container start fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:10:40 np0005539482 podman[102976]: 2025-11-29 05:10:40.285316399 +0000 UTC m=+0.182876079 container attach f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.290168224 +0000 UTC m=+0.167976676 container attach fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:10:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 00:10:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788946129' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 00:10:40 np0005539482 clever_black[103524]: mimic
Nov 29 00:10:40 np0005539482 systemd[1]: libpod-fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292.scope: Deactivated successfully.
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.826312032 +0000 UTC m=+0.704120484 container died fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:10:40 np0005539482 systemd[1]: var-lib-containers-storage-overlay-37a065102c01d931ae2635bb9c0583c313eba74241ca3e497025e7cd5dcf6b60-merged.mount: Deactivated successfully.
Nov 29 00:10:40 np0005539482 podman[102982]: 2025-11-29 05:10:40.871187434 +0000 UTC m=+0.748995886 container remove fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:10:40 np0005539482 systemd[1]: libpod-conmon-fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292.scope: Deactivated successfully.
Nov 29 00:10:40 np0005539482 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 00:10:41 np0005539482 funny_swanson[103522]: {
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "osd_id": 0,
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "type": "bluestore"
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:    },
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "osd_id": 1,
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "type": "bluestore"
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:    },
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "osd_id": 2,
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:        "type": "bluestore"
Nov 29 00:10:41 np0005539482 funny_swanson[103522]:    }
Nov 29 00:10:41 np0005539482 funny_swanson[103522]: }
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:10:41
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.control', 'images', 'backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 232 B/s rd, 465 B/s wr, 1 op/s
Nov 29 00:10:41 np0005539482 systemd[1]: libpod-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope: Deactivated successfully.
Nov 29 00:10:41 np0005539482 systemd[1]: libpod-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope: Consumed 1.004s CPU time.
Nov 29 00:10:41 np0005539482 podman[103593]: 2025-11-29 05:10:41.332980622 +0000 UTC m=+0.027565722 container died f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:10:41 np0005539482 systemd[1]: var-lib-containers-storage-overlay-84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d-merged.mount: Deactivated successfully.
Nov 29 00:10:41 np0005539482 podman[103593]: 2025-11-29 05:10:41.396113446 +0000 UTC m=+0.090698546 container remove f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:10:41 np0005539482 systemd[1]: libpod-conmon-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope: Deactivated successfully.
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 789eabe4-50d0-4c54-9022-1207bfba532e does not exist
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e9aa2db9-645c-4bf9-9053-fd985b25d602 does not exist
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 29 00:10:41 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev b54068fd-06f2-486d-9164-d647b988f2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:41 np0005539482 python3[103731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.033585533 +0000 UTC m=+0.059028009 container create 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 00:10:42 np0005539482 systemd[1]: Started libpod-conmon-59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88.scope.
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.009661687 +0000 UTC m=+0.035104163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:10:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4219e571bd90d3100e6cda2037394426800c8acaf03ea7bc5b7eb27c4af5bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4219e571bd90d3100e6cda2037394426800c8acaf03ea7bc5b7eb27c4af5bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.138027614 +0000 UTC m=+0.163470100 container init 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.146135706 +0000 UTC m=+0.171578202 container start 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.150868148 +0000 UTC m=+0.176310614 container attach 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:10:42 np0005539482 podman[103872]: 2025-11-29 05:10:42.506750341 +0000 UTC m=+0.093522055 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:10:42 np0005539482 podman[103872]: 2025-11-29 05:10:42.634712709 +0000 UTC m=+0.221484413 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779896496' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 00:10:42 np0005539482 interesting_antonelli[103801]: 
Nov 29 00:10:42 np0005539482 interesting_antonelli[103801]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 29 00:10:42 np0005539482 systemd[1]: libpod-59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88.scope: Deactivated successfully.
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.77086595 +0000 UTC m=+0.796308416 container died 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:10:42 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9b4219e571bd90d3100e6cda2037394426800c8acaf03ea7bc5b7eb27c4af5bd-merged.mount: Deactivated successfully.
Nov 29 00:10:42 np0005539482 podman[103782]: 2025-11-29 05:10:42.813148671 +0000 UTC m=+0.838591137 container remove 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:10:42 np0005539482 systemd[1]: libpod-conmon-59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88.scope: Deactivated successfully.
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 29 00:10:42 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 09d9423c-3037-46fc-8fab-602c085244a8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 468 B/s wr, 1 op/s
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 92af31f6-6f04-4e62-831d-0412490461d8 does not exist
Nov 29 00:10:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 84ac48af-dc68-424f-af83-b342a0417e49 does not exist
Nov 29 00:10:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 4de3bebe-6145-407b-a805-ae61f4d5459d does not exist
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 00:10:43 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 60b47820-dec1-4bfb-aa36-6ff5c734a866 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:43 np0005539482 podman[104196]: 2025-11-29 05:10:43.991322813 +0000 UTC m=+0.056651482 container create bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:44 np0005539482 systemd[1]: Started libpod-conmon-bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8.scope.
Nov 29 00:10:44 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:44 np0005539482 podman[104196]: 2025-11-29 05:10:43.973786648 +0000 UTC m=+0.039115307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:44 np0005539482 podman[104196]: 2025-11-29 05:10:44.074616444 +0000 UTC m=+0.139945153 container init bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:44 np0005539482 podman[104196]: 2025-11-29 05:10:44.082146933 +0000 UTC m=+0.147475592 container start bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:10:44 np0005539482 ecstatic_visvesvaraya[104212]: 167 167
Nov 29 00:10:44 np0005539482 podman[104196]: 2025-11-29 05:10:44.086957216 +0000 UTC m=+0.152285905 container attach bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:10:44 np0005539482 systemd[1]: libpod-bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8.scope: Deactivated successfully.
Nov 29 00:10:44 np0005539482 podman[104196]: 2025-11-29 05:10:44.087993761 +0000 UTC m=+0.153322420 container died bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:10:44 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f9e66ff3f7586be1c0261798741531d110f7cf68e652c3c25822fd8ad2a58bde-merged.mount: Deactivated successfully.
Nov 29 00:10:44 np0005539482 podman[104196]: 2025-11-29 05:10:44.123674435 +0000 UTC m=+0.189003084 container remove bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:44 np0005539482 systemd[1]: libpod-conmon-bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8.scope: Deactivated successfully.
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:44 np0005539482 podman[104238]: 2025-11-29 05:10:44.320611116 +0000 UTC m=+0.047957216 container create b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 00:10:44 np0005539482 systemd[1]: Started libpod-conmon-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope.
Nov 29 00:10:44 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:44 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:44 np0005539482 podman[104238]: 2025-11-29 05:10:44.295917851 +0000 UTC m=+0.023263991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:44 np0005539482 podman[104238]: 2025-11-29 05:10:44.398473219 +0000 UTC m=+0.125819369 container init b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:44 np0005539482 podman[104238]: 2025-11-29 05:10:44.406459467 +0000 UTC m=+0.133805587 container start b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:10:44 np0005539482 podman[104238]: 2025-11-29 05:10:44.410155435 +0000 UTC m=+0.137501565 container attach b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 00:10:44 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 24cf8aab-88f0-41ac-9c03-50177383e1e1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 8.0 KiB/s wr, 394 op/s
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:45 np0005539482 angry_poitras[104255]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:10:45 np0005539482 angry_poitras[104255]: --> relative data size: 1.0
Nov 29 00:10:45 np0005539482 angry_poitras[104255]: --> All data devices are unavailable
Nov 29 00:10:45 np0005539482 systemd[1]: libpod-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope: Deactivated successfully.
Nov 29 00:10:45 np0005539482 systemd[1]: libpod-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope: Consumed 1.021s CPU time.
Nov 29 00:10:45 np0005539482 conmon[104255]: conmon b69b545600e3f0cf2cdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope/container/memory.events
Nov 29 00:10:45 np0005539482 podman[104238]: 2025-11-29 05:10:45.478031547 +0000 UTC m=+1.205377647 container died b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325-merged.mount: Deactivated successfully.
Nov 29 00:10:45 np0005539482 podman[104238]: 2025-11-29 05:10:45.551337461 +0000 UTC m=+1.278683601 container remove b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:10:45 np0005539482 systemd[1]: libpod-conmon-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope: Deactivated successfully.
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 00:10:45 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43 pruub=9.690950394s) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active pruub 77.746055603s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:45 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 40938a28-7d35-4d1d-acc7-268a3723f906 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:46 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43 pruub=9.690950394s) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown pruub 77.746055603s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=13.106151581s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active pruub 71.429924011s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=13.106102943s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 71.429954529s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=13.106151581s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown pruub 71.429924011s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=13.106102943s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 71.429954529s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.268446692 +0000 UTC m=+0.041738269 container create b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:10:46 np0005539482 systemd[1]: Started libpod-conmon-b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f.scope.
Nov 29 00:10:46 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.337303901 +0000 UTC m=+0.110595498 container init b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.343240462 +0000 UTC m=+0.116532039 container start b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.3465385 +0000 UTC m=+0.119830097 container attach b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:10:46 np0005539482 zen_swirles[104455]: 167 167
Nov 29 00:10:46 np0005539482 systemd[1]: libpod-b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f.scope: Deactivated successfully.
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.347280628 +0000 UTC m=+0.120572215 container died b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.252197058 +0000 UTC m=+0.025488655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c7680914fbf19876e14cd51468caae358c6f8f7a083ad79ca4c9780af84c8dca-merged.mount: Deactivated successfully.
Nov 29 00:10:46 np0005539482 ceph-mgr[75473]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 29 00:10:46 np0005539482 podman[104439]: 2025-11-29 05:10:46.379764486 +0000 UTC m=+0.153056073 container remove b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 00:10:46 np0005539482 systemd[1]: libpod-conmon-b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f.scope: Deactivated successfully.
Nov 29 00:10:46 np0005539482 podman[104481]: 2025-11-29 05:10:46.538472762 +0000 UTC m=+0.043844658 container create 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:10:46 np0005539482 systemd[1]: Started libpod-conmon-7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3.scope.
Nov 29 00:10:46 np0005539482 podman[104481]: 2025-11-29 05:10:46.517094076 +0000 UTC m=+0.022466022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:46 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:46 np0005539482 podman[104481]: 2025-11-29 05:10:46.643932917 +0000 UTC m=+0.149304913 container init 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:10:46 np0005539482 podman[104481]: 2025-11-29 05:10:46.655934502 +0000 UTC m=+0.161306428 container start 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:10:46 np0005539482 podman[104481]: 2025-11-29 05:10:46.659738182 +0000 UTC m=+0.165110108 container attach 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 00:10:47 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev d093d1d7-e900-4ac1-90ba-e4b9b7c58eeb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=41/44 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 8.0 KiB/s wr, 394 op/s
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]: {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:    "0": [
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:        {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "devices": [
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "/dev/loop3"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            ],
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_name": "ceph_lv0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_size": "21470642176",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "name": "ceph_lv0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "tags": {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.crush_device_class": "",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.encrypted": "0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osd_id": "0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.type": "block",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.vdo": "0"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            },
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "type": "block",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "vg_name": "ceph_vg0"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:        }
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:    ],
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:    "1": [
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:        {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "devices": [
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "/dev/loop4"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            ],
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_name": "ceph_lv1",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_size": "21470642176",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "name": "ceph_lv1",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "tags": {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.crush_device_class": "",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.encrypted": "0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osd_id": "1",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.type": "block",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.vdo": "0"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            },
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "type": "block",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "vg_name": "ceph_vg1"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:        }
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:    ],
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:    "2": [
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:        {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "devices": [
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "/dev/loop5"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            ],
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_name": "ceph_lv2",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_size": "21470642176",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "name": "ceph_lv2",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "tags": {
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.cluster_name": "ceph",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.crush_device_class": "",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.encrypted": "0",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osd_id": "2",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.type": "block",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:                "ceph.vdo": "0"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            },
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "type": "block",
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:            "vg_name": "ceph_vg2"
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:        }
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]:    ]
Nov 29 00:10:47 np0005539482 quirky_darwin[104498]: }
Nov 29 00:10:47 np0005539482 systemd[1]: libpod-7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3.scope: Deactivated successfully.
Nov 29 00:10:47 np0005539482 podman[104481]: 2025-11-29 05:10:47.43715957 +0000 UTC m=+0.942531476 container died 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:10:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1-merged.mount: Deactivated successfully.
Nov 29 00:10:47 np0005539482 systemd[76809]: Starting Mark boot as successful...
Nov 29 00:10:47 np0005539482 systemd[76809]: Finished Mark boot as successful.
Nov 29 00:10:47 np0005539482 podman[104481]: 2025-11-29 05:10:47.497039317 +0000 UTC m=+1.002411233 container remove 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:10:47 np0005539482 systemd[1]: libpod-conmon-7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3.scope: Deactivated successfully.
Nov 29 00:10:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 00:10:48 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 3d650303-1eb4-4605-9d73-51a2e3a81f60 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:48 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 45 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=19/20 n=22 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=11.249977112s) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 31'38 mlcod 31'38 active pruub 81.319190979s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:48 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 45 pg[6.0( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=11.249977112s) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 31'38 mlcod 0'0 unknown pruub 81.319190979s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 00:10:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=41 pruub=13.363991737s) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active pruub 78.893814087s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=41 pruub=13.363991737s) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown pruub 78.893814087s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=13.139037132s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 78.675048828s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=13.139037132s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 78.675048828s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.188579013 +0000 UTC m=+0.060015882 container create 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:10:48 np0005539482 systemd[1]: Started libpod-conmon-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope.
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.159502265 +0000 UTC m=+0.030939194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.273633035 +0000 UTC m=+0.145069914 container init 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.281811289 +0000 UTC m=+0.153248148 container start 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.285637219 +0000 UTC m=+0.157074118 container attach 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:10:48 np0005539482 cranky_neumann[104677]: 167 167
Nov 29 00:10:48 np0005539482 systemd[1]: libpod-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope: Deactivated successfully.
Nov 29 00:10:48 np0005539482 conmon[104677]: conmon 7578d87071be9d4c39e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope/container/memory.events
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.288226111 +0000 UTC m=+0.159662990 container died 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:10:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7033aef4372ca7c05096fc654e74b3470927c8819ec5485178867281cd8ab029-merged.mount: Deactivated successfully.
Nov 29 00:10:48 np0005539482 podman[104661]: 2025-11-29 05:10:48.326513267 +0000 UTC m=+0.197950136 container remove 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:48 np0005539482 systemd[1]: libpod-conmon-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope: Deactivated successfully.
Nov 29 00:10:48 np0005539482 podman[104702]: 2025-11-29 05:10:48.495922626 +0000 UTC m=+0.041487953 container create 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:10:48 np0005539482 systemd[1]: Started libpod-conmon-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope.
Nov 29 00:10:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:10:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:10:48 np0005539482 podman[104702]: 2025-11-29 05:10:48.476138358 +0000 UTC m=+0.021703735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:10:48 np0005539482 podman[104702]: 2025-11-29 05:10:48.580291242 +0000 UTC m=+0.125856599 container init 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:10:48 np0005539482 podman[104702]: 2025-11-29 05:10:48.587095663 +0000 UTC m=+0.132661020 container start 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:10:48 np0005539482 podman[104702]: 2025-11-29 05:10:48.593535406 +0000 UTC m=+0.139100753 container attach 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:10:48 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 00:10:48 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 00:10:48 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 29 00:10:48 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 00:10:49 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev d8761330-0c02-45a4-a3c5-bfa81a62a4af (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.a( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.5( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.9( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.4( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.8( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.7( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.2( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.f( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.e( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.c( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.d( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=41/46 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v101: 181 pgs: 108 unknown, 73 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]: {
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "osd_id": 0,
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "type": "bluestore"
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:    },
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "osd_id": 1,
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "type": "bluestore"
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:    },
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "osd_id": 2,
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:        "type": "bluestore"
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]:    }
Nov 29 00:10:49 np0005539482 awesome_archimedes[104719]: }
Nov 29 00:10:49 np0005539482 systemd[1]: libpod-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope: Deactivated successfully.
Nov 29 00:10:49 np0005539482 podman[104702]: 2025-11-29 05:10:49.604383928 +0000 UTC m=+1.149949295 container died 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:10:49 np0005539482 systemd[1]: libpod-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope: Consumed 1.015s CPU time.
Nov 29 00:10:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5-merged.mount: Deactivated successfully.
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 00:10:49 np0005539482 podman[104702]: 2025-11-29 05:10:49.680750145 +0000 UTC m=+1.226315482 container remove 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:10:49 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 00:10:49 np0005539482 systemd[1]: libpod-conmon-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope: Deactivated successfully.
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:10:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 8f0cd8a4-94f1-45b6-8750-2ddf16853e64 does not exist
Nov 29 00:10:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev cb8f6dad-bfed-4058-8b76-ab751d70ae38 does not exist
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 29 00:10:49 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 00:10:50 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev 563f6d09-9437-473a-957b-26b842c824c9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[8.0( v 31'4 (0'0,31'4] local-lis/les=30/31 n=4 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=14.824311256s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 31'3 active pruub 82.229804993s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[9.0( v 38'583 (0'0,38'583] local-lis/les=32/33 n=209 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=8.842782021s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 38'582 mlcod 38'582 active pruub 76.248367310s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[8.0( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=14.824311256s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 0'0 unknown pruub 82.229804993s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[9.0( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=8.842782021s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 38'582 mlcod 0'0 unknown pruub 76.248367310s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 00:10:50 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 00:10:50 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] update: starting ev fa9a4006-7c10-453c-b204-8f1af395116a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev b54068fd-06f2-486d-9164-d647b988f2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event b54068fd-06f2-486d-9164-d647b988f2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 09d9423c-3037-46fc-8fab-602c085244a8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.15( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.14( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.14( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.15( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.17( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 09d9423c-3037-46fc-8fab-602c085244a8 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.16( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 60b47820-dec1-4bfb-aa36-6ff5c734a866 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 60b47820-dec1-4bfb-aa36-6ff5c734a866 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 24cf8aab-88f0-41ac-9c03-50177383e1e1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 24cf8aab-88f0-41ac-9c03-50177383e1e1 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 40938a28-7d35-4d1d-acc7-268a3723f906 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 40938a28-7d35-4d1d-acc7-268a3723f906 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev d093d1d7-e900-4ac1-90ba-e4b9b7c58eeb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event d093d1d7-e900-4ac1-90ba-e4b9b7c58eeb (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 3d650303-1eb4-4605-9d73-51a2e3a81f60 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 3d650303-1eb4-4605-9d73-51a2e3a81f60 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev d8761330-0c02-45a4-a3c5-bfa81a62a4af (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event d8761330-0c02-45a4-a3c5-bfa81a62a4af (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev 563f6d09-9437-473a-957b-26b842c824c9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 563f6d09-9437-473a-957b-26b842c824c9 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] complete: finished ev fa9a4006-7c10-453c-b204-8f1af395116a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event fa9a4006-7c10-453c-b204-8f1af395116a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.10( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.17( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.16( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.11( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.10( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.13( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.12( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.11( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.13( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.12( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.d( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.c( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.c( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.f( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.8( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.d( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.9( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.a( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.b( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.3( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.2( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1( v 31'4 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.e( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.a( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.8( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.9( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.2( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.3( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.6( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.7( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.6( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.7( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.4( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.5( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.5( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.4( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1a( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1b( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.19( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.18( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1a( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.18( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1e( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1f( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1d( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.19( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1c( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1d( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1c( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.14( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.16( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.17( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.10( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.13( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.12( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.8( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.0( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.0( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 38'582 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.2( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.a( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.3( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.7( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.5( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.4( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.19( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1a( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v104: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:51 np0005539482 ceph-mgr[75473]: [progress INFO root] Writing back 15 completed events
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 00:10:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 00:10:52 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 00:10:52 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.828987122s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active pruub 80.300170898s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:52 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.828987122s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown pruub 80.300170898s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:52 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 00:10:52 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 00:10:52 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 29 00:10:52 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 29 00:10:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 00:10:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 00:10:53 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 00:10:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v107: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:53 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 29 00:10:53 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 29 00:10:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 49 pg[10.0( v 35'16 (0'0,35'16] local-lis/les=34/35 n=8 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.534220695s) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 35'15 active pruub 81.059791565s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.0( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.534220695s) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 0'0 unknown pruub 81.059791565s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.7( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.8( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.9( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.a( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.b( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.c( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.2( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.3( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.4( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.5( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.6( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.d( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.f( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.10( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.e( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.11( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.12( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.13( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.14( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.15( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.16( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.17( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.18( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.19( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1a( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1b( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1c( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1d( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1e( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1f( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:10:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 00:10:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 00:10:55 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1d( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1c( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.18( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.3( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.5( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.0( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.c( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.d( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.15( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.14( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.9( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:10:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:55 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 29 00:10:55 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 29 00:10:55 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 29 00:10:55 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 29 00:10:56 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 29 00:10:56 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 29 00:10:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:10:57 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 00:10:57 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 00:10:58 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 00:10:58 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 00:10:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:10:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:11:00 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 29 00:11:00 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 29 00:11:00 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 29 00:11:00 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 29 00:11:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:11:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:01 np0005539482 ceph-mgr[75473]: [progress INFO root] Completed event 8a19af1e-d04e-4eb0-90ab-4fa888746f41 (Global Recovery Event) in 15 seconds
Nov 29 00:11:01 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 29 00:11:01 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 29 00:11:01 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 00:11:01 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 00:11:02 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806700706s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090400696s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.800364494s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084068298s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.800285339s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084022522s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806643486s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090400696s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.800265312s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084068298s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799758911s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084022522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799633026s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084075928s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806384087s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090850830s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799608231s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084075928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806331635s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090850830s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799503326s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084098816s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799499512s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084091187s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799484253s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084098816s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799433708s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084091187s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805849075s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090591431s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799783707s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084541321s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805829048s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090591431s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799480438s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084320068s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799401283s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084320068s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799250603s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084259033s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799760818s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084541321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799201965s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084259033s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799141884s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084251404s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799116135s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084251404s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806241035s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.091476440s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805368423s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090606689s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799041748s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084297180s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805338860s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090606689s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799015999s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084297180s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799060822s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084350586s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806206703s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.091476440s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799014091s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084350586s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805151939s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090728760s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798839569s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084434509s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798835754s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084472656s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798818588s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084434509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805104256s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090728760s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798794746s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084472656s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805103302s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090866089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.804840088s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090591431s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805082321s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090866089s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.804779053s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090591431s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798666000s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084533691s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798616409s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084526062s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798596382s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084526062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798618317s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084533691s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798488617s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084510803s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798519135s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084556580s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798500061s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084556580s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798476219s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084548950s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798441887s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084510803s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798392296s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084541321s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798430443s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084548950s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798376083s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084541321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798214912s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084579468s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798124313s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084579468s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908418655s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.312683105s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788142204s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192420959s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908383369s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.312683105s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788107872s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192420959s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.789016724s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193420410s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912281036s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316673279s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788949013s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193420410s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912191391s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316673279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.781491280s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.186050415s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912093163s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316673279s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.788057327s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192665100s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912058830s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316673279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787798882s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192436218s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.781434059s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.186050415s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.788005829s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192665100s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787751198s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192436218s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912031174s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316795349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912009239s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316795349s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787770271s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192581177s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787744522s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192581177s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787699699s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192657471s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788142204s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193099976s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788173676s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193176270s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787672043s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192657471s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788096428s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193099976s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788153648s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193176270s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788089752s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193099976s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788050652s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193099976s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787918091s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193084717s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787858009s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193092346s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787856102s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193084717s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911723137s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316970825s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787837982s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193092346s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911678314s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316970825s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787822723s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193183899s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911528587s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316932678s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911496162s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316932678s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787911415s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193290710s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787763596s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193183899s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787771225s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193290710s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787641525s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193283081s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787619591s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787540436s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193290710s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787517548s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193290710s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911201477s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317001343s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911181450s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317001343s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911158562s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317001343s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911150932s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317001343s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787746429s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193634033s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787750244s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193672180s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787727356s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193634033s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787703514s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193672180s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911005020s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317024231s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910983086s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317024231s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787546158s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193656921s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787524223s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193656921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788177490s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194313049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787768364s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193992615s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788104057s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194313049s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787747383s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193992615s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910818100s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317085266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910787582s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317047119s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910778046s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317085266s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910731316s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317047119s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787279129s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194007874s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787255287s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194007874s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787128448s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194007874s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787140846s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194061279s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787103653s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194007874s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787117958s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194061279s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787117004s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194061279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787326813s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194381714s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787071228s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194061279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909996033s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317077637s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787307739s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194381714s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909976959s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317077637s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787222862s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194442749s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787198067s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194442749s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909942627s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317192078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786861420s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194168091s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786796570s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194129944s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909870148s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317192078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786819458s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194198608s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786805153s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194168091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786797523s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194198608s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786253929s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194152832s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786315918s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194267273s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786208153s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194152832s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786292076s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194267273s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909038544s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317123413s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786027908s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194129944s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909008026s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317123413s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786204338s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194374084s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908976555s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317146301s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786079407s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194305420s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786164284s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194374084s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786061287s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194305420s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908883095s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317146301s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908768654s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317115784s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908626556s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317115784s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908273697s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317153931s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.790143967s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.199142456s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908158302s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317161560s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908170700s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317153931s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785473824s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194641113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785130501s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194549561s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907771111s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317192078s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785049438s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194549561s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907659531s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317192078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785076141s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194725037s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785057068s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194725037s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.789924622s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.199142456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907494545s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317161560s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785320282s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194641113s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907355309s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317176819s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788032532s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.197891235s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788012505s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.197891235s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907303810s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317176819s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.784483910s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194801331s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906848907s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317192078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787620544s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.197975159s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906826973s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317192078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787576675s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.197975159s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.783991814s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194801331s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906202316s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317207336s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906173706s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317207336s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786719322s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.197891235s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786702156s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.197891235s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787779808s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.199142456s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787741661s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.199142456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857205391s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.486763000s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857173920s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.486763000s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.788203239s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417816162s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.788167953s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.788183212s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417869568s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.788143158s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417869568s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787726402s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417816162s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.793228149s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423332214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787698746s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.793208122s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423332214s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.787457466s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417800903s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.787395477s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417800903s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.860441208s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.490974426s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.860420227s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.490974426s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792739868s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423271179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787227631s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417892456s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792643547s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423271179s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792794228s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423484802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787177086s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417892456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792774200s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423484802s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.783559799s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414535522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.783502579s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414535522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.783287048s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414505005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.783268929s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414505005s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.860280991s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491096497s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791881561s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423355103s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791820526s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423355103s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791843414s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423469543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.859481812s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491096497s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791821480s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423469543s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791752815s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423721313s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791732788s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423721313s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.859150887s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491172791s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858891487s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.490982056s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.859103203s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491172791s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858875275s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.490982056s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.785293579s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417884827s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.785241127s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417884827s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790006638s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.422691345s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789958000s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.422691345s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790723801s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423789978s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790676117s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423789978s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790492058s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423561096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790179253s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423561096s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780909538s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414421082s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858075142s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491607666s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858024597s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491607666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780848503s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414421082s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780668259s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414497375s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780625343s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414497375s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857177734s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491127014s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780667305s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414680481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857131004s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491127014s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780526161s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414421082s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780619621s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414680481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780302048s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414421082s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789671898s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423812866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789633751s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423812866s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789542198s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423782349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856815338s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491149902s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856791496s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491149902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789495468s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423782349s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.794953346s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429428101s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.794935226s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429428101s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779895782s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414398193s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779864311s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414398193s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779850960s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414413452s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856511116s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491149902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779797554s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414413452s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779530525s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414245605s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856465340s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491149902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779503822s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414245605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779339790s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414222717s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788940430s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423843384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779294968s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414222717s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789008141s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423973083s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788898468s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423843384s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788973808s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423973083s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.855963707s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491157532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.855909348s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491157532s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.777730942s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414184570s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.777702332s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414184570s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786898613s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423866272s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786866188s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423866272s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.853936195s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491325378s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.776669502s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414184570s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.853899002s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491325378s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.776618958s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414184570s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786219597s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.424095154s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.776266098s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414169312s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786177635s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.424095154s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.776092529s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414169312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.776009560s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414199829s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.775964737s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414199829s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775331497s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413932800s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775288582s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413932800s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.775119781s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413925171s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775060654s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413917542s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.775067329s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413925171s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775020599s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413917542s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852431297s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491348267s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852385521s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491348267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.774798393s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413917542s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790207863s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.429458618s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852411270s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491279602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790445328s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429710388s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790128708s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429458618s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.774775505s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413917542s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790320396s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429710388s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852021217s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491462708s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851974487s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491462708s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789714813s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429512024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773963928s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413764954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789690971s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429512024s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773869514s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413742065s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773899078s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413764954s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773812294s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413742065s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789587021s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429718018s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789566040s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429718018s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851199150s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491607666s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851178169s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491607666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851904869s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491279602s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772701263s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413314819s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772662163s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413314819s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.850867271s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491615295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789785385s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430534363s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789736748s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430534363s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788850784s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429687500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.772235870s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413154602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.772204399s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413154602s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.850845337s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491615295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772057533s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413116455s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772015572s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413116455s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788815498s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429687500s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.771298409s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413116455s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.849765778s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491615295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.771262169s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413116455s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.849740982s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491615295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788393974s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430351257s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789003372s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430984497s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.771003723s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413032532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788945198s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430984497s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788334846s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430351257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770841599s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413032532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770813942s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413032532s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770620346s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413024902s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788051605s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430656433s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.770444870s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413032532s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788022995s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430656433s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770407677s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413024902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.848637581s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491645813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.848608971s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491645813s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.787676811s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430938721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.787578583s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430938721s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786602020s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430664062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786562920s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430664062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786143303s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430664062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786389351s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430931091s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.768260002s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412811279s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786114693s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430664062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786360741s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430931091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.768213272s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412811279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.767839432s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412712097s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.767819405s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412712097s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846790314s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491699219s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846781731s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491722107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846744537s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491699219s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846710205s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491722107s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.766963005s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412788391s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.766901970s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412788391s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792809486s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.439002991s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792767525s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.439002991s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784534454s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430953979s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784504890s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430953979s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784186363s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.431022644s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784163475s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.431022644s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844812393s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491706848s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844770432s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491706848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844600677s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491706848s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.783916473s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.431121826s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844511986s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491706848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.783894539s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.431121826s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.764976501s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412239075s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.764951706s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412239075s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770415306s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417892456s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770350456s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417892456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844104767s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491714478s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844085693s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491714478s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791116714s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.438949585s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.764348984s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412170410s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791099548s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.438949585s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.764307022s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412170410s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.754925728s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.403030396s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.843610764s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491744995s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.754908562s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.403030396s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790827751s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.438980103s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.843582153s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491744995s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790781975s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.438980103s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.763880730s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412307739s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.763862610s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412307739s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790631294s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.439140320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790570259s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.439140320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:02 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.1b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.d( v 51'17 lc 35'9 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.14( v 51'17 lc 35'13 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.e( v 51'17 lc 35'7 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.15( v 51'17 lc 35'5 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.9( v 51'17 lc 35'15 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.f( v 35'39 lc 31'1 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.5( v 35'39 lc 31'11 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.7( v 35'39 lc 31'21 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.d( v 35'39 lc 31'13 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 00:11:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 00:11:03 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 00:11:04 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.365850449s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.083297729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.365795135s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.083297729s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372951508s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090682983s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372912407s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090682983s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372849464s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090759277s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372788429s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090705872s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372824669s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:04 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372759819s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090705872s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:04 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 00:11:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 00:11:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 00:11:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 00:11:05 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.389012337s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.040473938s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.388888359s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.040473938s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.395154953s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047615051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.387639999s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.040473938s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.387569427s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.040473938s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394632339s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047538757s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394430161s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047538757s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394090652s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047500610s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394824028s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047615051s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.393635750s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047500610s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=54/55 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.e( v 35'39 lc 31'19 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:05 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:05 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 545 B/s, 2 keys/s, 4 objects/s recovering
Nov 29 00:11:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 00:11:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 00:11:06 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376647949s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048171997s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376476288s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048027039s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376409531s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048027039s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376550674s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048171997s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376393318s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048110962s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376317978s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048110962s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375818253s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047706604s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375753403s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047706604s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375701904s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047813416s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375654221s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047813416s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375581741s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047927856s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375538826s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047912598s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375473976s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047912598s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375487328s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047927856s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375023842s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047935486s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374944687s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047935486s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374961853s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048049927s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374711990s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047805786s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374919891s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048049927s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374621391s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047805786s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.372932434s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047706604s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:06 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.372808456s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047706604s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=55/56 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:06 np0005539482 ceph-mgr[75473]: [progress INFO root] Writing back 16 completed events
Nov 29 00:11:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 00:11:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 29 00:11:06 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 29 00:11:06 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 29 00:11:06 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 29 00:11:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 00:11:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 537 B/s, 2 keys/s, 4 objects/s recovering
Nov 29 00:11:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 00:11:07 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 29 00:11:07 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 29 00:11:07 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Nov 29 00:11:07 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Nov 29 00:11:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 433 B/s, 1 keys/s, 4 objects/s recovering
Nov 29 00:11:09 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 29 00:11:09 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 29 00:11:09 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 00:11:09 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 00:11:10 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 00:11:10 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 372 B/s, 17 objects/s recovering
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 00:11:11 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:11:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599431038s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.636581421s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599365234s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.636581421s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599466324s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.636764526s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599435806s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.636764526s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:11 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599071503s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.636917114s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599032402s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.637062073s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.598900795s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.637062073s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:11 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.598788261s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.636917114s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:11 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:11 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:11 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 00:11:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 00:11:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 00:11:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 00:11:12 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 00:11:12 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:12 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=58/59 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:12 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.f( v 35'39 lc 31'1 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:12 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.7( v 35'39 lc 31'21 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:12 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 00:11:12 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 00:11:12 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Nov 29 00:11:12 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Nov 29 00:11:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 373 B/s, 17 objects/s recovering
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 00:11:13 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 00:11:13 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258614540s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 111.090835571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:13 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258539200s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.090835571s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:13 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258361816s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 111.091163635s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:13 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258294106s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.091163635s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:13 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:13 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 00:11:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 00:11:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 00:11:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 00:11:14 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 00:11:14 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 61 pg[6.4( v 35'39 lc 31'15 (0'0,35'39] local-lis/les=60/61 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:14 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 61 pg[6.c( v 35'39 lc 31'17 (0'0,35'39] local-lis/les=60/61 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:14 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 00:11:14 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 00:11:14 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 29 00:11:14 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 29 00:11:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Nov 29 00:11:15 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 29 00:11:15 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 29 00:11:16 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 29 00:11:16 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 29 00:11:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 00:11:17 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 29 00:11:17 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 317 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 00:11:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 00:11:19 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.738649368s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 35'39 active pruub 112.637779236s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:19 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.738382339s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 112.637779236s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:19 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.737648964s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 35'39 active pruub 112.637100220s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:19 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.737483025s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 112.637100220s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:19 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:19 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:19 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 00:11:19 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 00:11:19 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 00:11:19 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 00:11:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 00:11:20 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 00:11:20 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 00:11:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 00:11:20 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 00:11:20 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 63 pg[6.5( v 35'39 lc 31'11 (0'0,35'39] local-lis/les=62/63 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:20 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 63 pg[6.d( v 35'39 lc 31'13 (0'0,35'39] local-lis/les=62/63 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:20 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 29 00:11:20 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 29 00:11:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 318 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 00:11:21 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 29 00:11:21 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 29 00:11:22 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 29 00:11:22 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 29 00:11:22 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 29 00:11:22 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 29 00:11:22 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 29 00:11:22 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 29 00:11:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 195 B/s, 0 objects/s recovering
Nov 29 00:11:23 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 29 00:11:23 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 29 00:11:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:24 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 00:11:24 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 00:11:24 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 00:11:24 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 00:11:24 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 29 00:11:24 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 29 00:11:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 215 B/s, 1 objects/s recovering
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 00:11:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 00:11:25 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 29 00:11:25 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 29 00:11:26 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 00:11:26 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 00:11:26 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Nov 29 00:11:26 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Nov 29 00:11:26 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 29 00:11:26 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 29 00:11:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.584018707s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.424026489s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.583935738s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.424026489s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589673996s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.430610657s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589609146s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.430610657s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589757919s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.431015015s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589647293s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.431015015s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589888573s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.431617737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589848518s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.431617737s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713256836s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 active pruub 121.377464294s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713195801s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 121.377464294s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713134766s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 active pruub 121.377655029s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713078499s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 121.377655029s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 00:11:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.712237358s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 active pruub 121.377822876s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.711883545s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 121.377822876s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=10.695466995s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=38'583 mlcod 0'0 active pruub 120.362739563s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=10.695433617s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 120.362739563s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=65) [2] r=0 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 29 00:11:27 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 29 00:11:27 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 29 00:11:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 00:11:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 00:11:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 00:11:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 00:11:28 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=0 lpr=66 pi=[55,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=0 lpr=66 pi=[55,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:28 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:28 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:28 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:28 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 00:11:28 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 00:11:28 np0005539482 python3[104842]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:29.018808422 +0000 UTC m=+0.045707453 container create 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:11:29 np0005539482 systemd[1]: Started libpod-conmon-7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68.scope.
Nov 29 00:11:29 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:28.996506354 +0000 UTC m=+0.023405385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:11:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b547f92855f69286cbaa4f4905258e7d22f90bf0dc82328602bfafd191af287/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b547f92855f69286cbaa4f4905258e7d22f90bf0dc82328602bfafd191af287/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:29.112071669 +0000 UTC m=+0.138970760 container init 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:29.120092838 +0000 UTC m=+0.146991859 container start 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:29.123661383 +0000 UTC m=+0.150560464 container attach 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.469320297s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.020950317s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.469220161s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.020973206s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.469173431s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.020950317s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.466773033s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.018760681s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.466724396s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.018760681s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.468585014s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.020935059s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.468539238s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.020935059s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.468110085s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.020973206s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:29 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:29 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[55,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:29 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:29 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 00:11:29 np0005539482 objective_gould[104858]: could not fetch user info: no user info saved
Nov 29 00:11:29 np0005539482 systemd[1]: libpod-7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68.scope: Deactivated successfully.
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:29.391675906 +0000 UTC m=+0.418574947 container died 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:11:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6b547f92855f69286cbaa4f4905258e7d22f90bf0dc82328602bfafd191af287-merged.mount: Deactivated successfully.
Nov 29 00:11:29 np0005539482 podman[104843]: 2025-11-29 05:11:29.442613012 +0000 UTC m=+0.469512053 container remove 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:11:29 np0005539482 systemd[1]: libpod-conmon-7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68.scope: Deactivated successfully.
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 00:11:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 00:11:29 np0005539482 python3[104981]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 29 00:11:29 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 00:11:29 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 00:11:29 np0005539482 podman[104982]: 2025-11-29 05:11:29.892253302 +0000 UTC m=+0.048648952 container create 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:11:29 np0005539482 systemd[1]: Started libpod-conmon-330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46.scope.
Nov 29 00:11:29 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:29 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 29 00:11:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1677cda9adcb3b14b68600db95b7a3eb91e7ff7d918e599800dde1ea9238dd68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1677cda9adcb3b14b68600db95b7a3eb91e7ff7d918e599800dde1ea9238dd68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:29 np0005539482 podman[104982]: 2025-11-29 05:11:29.875410484 +0000 UTC m=+0.031806124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 00:11:29 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 29 00:11:29 np0005539482 podman[104982]: 2025-11-29 05:11:29.979570159 +0000 UTC m=+0.135965799 container init 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:11:29 np0005539482 podman[104982]: 2025-11-29 05:11:29.985998741 +0000 UTC m=+0.142394361 container start 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:11:29 np0005539482 podman[104982]: 2025-11-29 05:11:29.988671864 +0000 UTC m=+0.145067484 container attach 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]: {
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "user_id": "openstack",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "display_name": "openstack",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "email": "",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "suspended": 0,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "max_buckets": 1000,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "subusers": [],
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "keys": [
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        {
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:            "user": "openstack",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:            "access_key": "BVHCHSDCJ5LYYWQFI2Q3",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:            "secret_key": "5v911KYTEXlGLdbwGYEKOjV4DFSvchdMwWFkshhZ"
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        }
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    ],
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "swift_keys": [],
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "caps": [],
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "op_mask": "read, write, delete",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "default_placement": "",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "default_storage_class": "",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "placement_tags": [],
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "bucket_quota": {
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "enabled": false,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "check_on_raw": false,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "max_size": -1,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "max_size_kb": 0,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "max_objects": -1
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    },
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "user_quota": {
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "enabled": false,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "check_on_raw": false,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "max_size": -1,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "max_size_kb": 0,
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:        "max_objects": -1
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    },
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "temp_url_keys": [],
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "type": "rgw",
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]:    "mfa_ids": []
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]: }
Nov 29 00:11:30 np0005539482 frosty_elgamal[104998]: 
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 00:11:30 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875186920s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.430702209s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875101089s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.430702209s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875391006s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.431510925s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875341415s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.431510925s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68) [2] r=0 lpr=68 pi=[55,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68) [2] r=0 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68 pruub=15.000617981s) [2] async=[2] r=-1 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 38'583 active pruub 127.244735718s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000761986s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 38'583 active pruub 127.244918823s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68 pruub=15.000556946s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.244735718s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000699043s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.244918823s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000412941s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 38'583 active pruub 127.245010376s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000065804s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 38'583 active pruub 127.244720459s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000247955s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.245010376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=14.999962807s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.244720459s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68 pruub=14.846203804s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 127.091217041s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68 pruub=14.846153259s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.091217041s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=67/68 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:30 np0005539482 systemd[1]: libpod-330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46.scope: Deactivated successfully.
Nov 29 00:11:30 np0005539482 podman[104982]: 2025-11-29 05:11:30.206047488 +0000 UTC m=+0.362443158 container died 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:11:30 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1677cda9adcb3b14b68600db95b7a3eb91e7ff7d918e599800dde1ea9238dd68-merged.mount: Deactivated successfully.
Nov 29 00:11:30 np0005539482 podman[104982]: 2025-11-29 05:11:30.249738112 +0000 UTC m=+0.406133762 container remove 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:11:30 np0005539482 systemd[1]: libpod-conmon-330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46.scope: Deactivated successfully.
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 00:11:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 00:11:30 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 29 00:11:30 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 00:11:30 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 29 00:11:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 00:11:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 00:11:31 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 00:11:31 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:31 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:31 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:31 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=68/69 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68) [2] r=0 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=68/69 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s; 208 B/s, 12 objects/s recovering
Nov 29 00:11:31 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 29 00:11:31 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 29 00:11:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 00:11:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 00:11:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 00:11:32 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 70 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=69/70 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:32 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 70 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=69/70 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 990 B/s rd, 0 op/s; 186 B/s, 11 objects/s recovering
Nov 29 00:11:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 00:11:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 00:11:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 00:11:33 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=69/70 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.196173668s) [2] async=[2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 126.239318848s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:33 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=69/70 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.196031570s) [2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.239318848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:33 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.192161560s) [2] async=[2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 126.235893250s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:33 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.192124367s) [2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.235893250s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:33 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:33 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:33 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:33 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:33 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 29 00:11:33 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 29 00:11:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 00:11:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 00:11:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 00:11:34 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 72 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=71/72 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:34 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 72 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=71/72 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:34 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 29 00:11:34 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 29 00:11:34 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 29 00:11:34 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 29 00:11:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 496 B/s wr, 3 op/s; 53 B/s, 3 objects/s recovering
Nov 29 00:11:36 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 29 00:11:36 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 29 00:11:36 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 29 00:11:36 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 29 00:11:36 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 00:11:36 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 00:11:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 2 objects/s recovering
Nov 29 00:11:37 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 29 00:11:37 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 29 00:11:38 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 29 00:11:38 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 29 00:11:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 287 B/s wr, 1 op/s; 30 B/s, 1 objects/s recovering
Nov 29 00:11:39 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 29 00:11:39 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:11:41
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 27 B/s, 1 objects/s recovering
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:11:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 00:11:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 00:11:41 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 29 00:11:41 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 29 00:11:41 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 29 00:11:41 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 29 00:11:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 00:11:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 00:11:42 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 73 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=8.403535843s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 128.637680054s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:42 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 73 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=8.403483391s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.637680054s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:42 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:42 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 00:11:42 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 00:11:42 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Nov 29 00:11:42 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Nov 29 00:11:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 355 B/s rd, 118 B/s wr, 0 op/s
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 00:11:43 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 00:11:43 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 74 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=73/74 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:43 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 29 00:11:43 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Nov 29 00:11:43 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 74 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=9.426406860s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 130.664230347s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:43 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 74 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=9.425595284s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.664230347s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:43 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 74 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74) [0] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:43 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 29 00:11:43 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Nov 29 00:11:43 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 29 00:11:43 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 29 00:11:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 00:11:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 00:11:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 00:11:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 00:11:44 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 00:11:44 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 75 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=74/75 n=1 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74) [0] r=0 lpr=74 pi=[54,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:45 np0005539482 systemd-logind[793]: New session 33 of user zuul.
Nov 29 00:11:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 00:11:45 np0005539482 systemd[1]: Started Session 33 of User zuul.
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 00:11:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 00:11:46 np0005539482 python3.9[105248]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:11:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 00:11:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 00:11:46 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 29 00:11:46 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 29 00:11:46 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 76 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76 pruub=13.412634850s) [1] r=-1 lpr=76 pi=[58,76)/1 crt=35'39 mlcod 35'39 active pruub 142.429672241s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:46 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 76 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76 pruub=13.412522316s) [1] r=-1 lpr=76 pi=[58,76)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 142.429672241s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:46 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 76 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76) [1] r=0 lpr=76 pi=[58,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 00:11:47 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 00:11:47 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.211524963s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 140.425521851s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:47 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.211388588s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.425521851s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:47 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.224774361s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 140.439880371s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:47 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.224705696s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.439880371s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:47 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=76/77 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76) [1] r=0 lpr=76 pi=[58,76)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77) [2] r=0 lpr=77 pi=[47,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:47 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77) [2] r=0 lpr=77 pi=[47,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:47 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 29 00:11:47 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 29 00:11:48 np0005539482 python3.9[105466]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:11:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 00:11:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 00:11:48 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 00:11:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 00:11:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 00:11:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:48 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:48 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:48 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:48 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:48 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 00:11:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 00:11:49 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 29 00:11:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 00:11:49 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 00:11:49 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 29 00:11:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 79 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:49 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 79 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:50 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 286051a6-d671-4dd3-8a75-0b2cc1f8ff52 does not exist
Nov 29 00:11:50 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a250bf51-f3c1-4ce2-85e2-cb8b89a33a48 does not exist
Nov 29 00:11:50 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 20c5587b-4f55-474b-bdc4-a5e09dd63767 does not exist
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 29 00:11:50 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:11:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.997946739s) [2] async=[2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 143.250976562s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.997536659s) [2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.250976562s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.995174408s) [2] async=[2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 143.249145508s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:50 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.995050430s) [2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.249145508s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:50 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:50 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:50 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:50 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:11:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.497867892 +0000 UTC m=+0.053234081 container create 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:11:51 np0005539482 systemd[1]: Started libpod-conmon-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope.
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.469055185 +0000 UTC m=+0.024421434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:11:51 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.606788211 +0000 UTC m=+0.162154450 container init 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.621190909 +0000 UTC m=+0.176557108 container start 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.625072441 +0000 UTC m=+0.180438630 container attach 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:11:51 np0005539482 systemd[1]: libpod-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope: Deactivated successfully.
Nov 29 00:11:51 np0005539482 keen_shannon[105769]: 167 167
Nov 29 00:11:51 np0005539482 conmon[105769]: conmon 97941308151bd1bc5ca1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope/container/memory.events
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.633616121 +0000 UTC m=+0.188982310 container died 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:11:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d4c6fe43e6c3169cc5f596cdc6d6037d5f2e441a7a69310394c11d6344311ffa-merged.mount: Deactivated successfully.
Nov 29 00:11:51 np0005539482 podman[105750]: 2025-11-29 05:11:51.690114288 +0000 UTC m=+0.245480477 container remove 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:11:51 np0005539482 systemd[1]: libpod-conmon-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope: Deactivated successfully.
Nov 29 00:11:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 00:11:51 np0005539482 podman[105794]: 2025-11-29 05:11:51.880052251 +0000 UTC m=+0.056544169 container create 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:11:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 00:11:51 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 00:11:51 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 81 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:51 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 81 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:51 np0005539482 systemd[1]: Started libpod-conmon-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope.
Nov 29 00:11:51 np0005539482 podman[105794]: 2025-11-29 05:11:51.850037076 +0000 UTC m=+0.026529064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:11:51 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:51 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:52 np0005539482 podman[105794]: 2025-11-29 05:11:52.008116819 +0000 UTC m=+0.184608797 container init 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:11:52 np0005539482 podman[105794]: 2025-11-29 05:11:52.028344904 +0000 UTC m=+0.204836842 container start 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:11:52 np0005539482 podman[105794]: 2025-11-29 05:11:52.032730277 +0000 UTC m=+0.209222215 container attach 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:11:53 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 29 00:11:53 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 29 00:11:53 np0005539482 tender_ganguly[105811]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:11:53 np0005539482 tender_ganguly[105811]: --> relative data size: 1.0
Nov 29 00:11:53 np0005539482 tender_ganguly[105811]: --> All data devices are unavailable
Nov 29 00:11:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 00:11:53 np0005539482 systemd[1]: libpod-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope: Deactivated successfully.
Nov 29 00:11:53 np0005539482 systemd[1]: libpod-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope: Consumed 1.238s CPU time.
Nov 29 00:11:53 np0005539482 podman[105794]: 2025-11-29 05:11:53.324208676 +0000 UTC m=+1.500700614 container died 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:11:53 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 29 00:11:53 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 29 00:11:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4-merged.mount: Deactivated successfully.
Nov 29 00:11:53 np0005539482 podman[105794]: 2025-11-29 05:11:53.986561277 +0000 UTC m=+2.163053215 container remove 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:11:54 np0005539482 systemd[1]: libpod-conmon-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope: Deactivated successfully.
Nov 29 00:11:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.707333239 +0000 UTC m=+0.039764375 container create 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:11:54 np0005539482 systemd[1]: Started libpod-conmon-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope.
Nov 29 00:11:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.774861095 +0000 UTC m=+0.107292241 container init 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.782019734 +0000 UTC m=+0.114450870 container start 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.688073928 +0000 UTC m=+0.020505074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.785585198 +0000 UTC m=+0.118016354 container attach 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:11:54 np0005539482 boring_aryabhata[106018]: 167 167
Nov 29 00:11:54 np0005539482 systemd[1]: libpod-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope: Deactivated successfully.
Nov 29 00:11:54 np0005539482 conmon[106018]: conmon 7e8d87b9b161bbcb9227 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope/container/memory.events
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.789213793 +0000 UTC m=+0.121644929 container died 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:11:54 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6d294f2027b7a96ad5a030fae906ef3dcc47d967df7d9e34bcfedaf5ea094d02-merged.mount: Deactivated successfully.
Nov 29 00:11:54 np0005539482 podman[105997]: 2025-11-29 05:11:54.832738266 +0000 UTC m=+0.165169412 container remove 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:11:54 np0005539482 systemd[1]: libpod-conmon-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope: Deactivated successfully.
Nov 29 00:11:54 np0005539482 podman[106042]: 2025-11-29 05:11:54.992385826 +0000 UTC m=+0.039584560 container create b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:11:55 np0005539482 systemd[1]: Started libpod-conmon-b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab.scope.
Nov 29 00:11:55 np0005539482 podman[106042]: 2025-11-29 05:11:54.975290875 +0000 UTC m=+0.022489629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:11:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:55 np0005539482 podman[106042]: 2025-11-29 05:11:55.110170163 +0000 UTC m=+0.157368987 container init b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:11:55 np0005539482 podman[106042]: 2025-11-29 05:11:55.120376533 +0000 UTC m=+0.167575267 container start b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:11:55 np0005539482 podman[106042]: 2025-11-29 05:11:55.123993728 +0000 UTC m=+0.171192522 container attach b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:11:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 00:11:55 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 00:11:55 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 00:11:55 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 29 00:11:55 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]: {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:    "0": [
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:        {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "devices": [
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "/dev/loop3"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            ],
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_name": "ceph_lv0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_size": "21470642176",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "name": "ceph_lv0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "tags": {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cluster_name": "ceph",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.crush_device_class": "",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.encrypted": "0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osd_id": "0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.type": "block",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.vdo": "0"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            },
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "type": "block",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "vg_name": "ceph_vg0"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:        }
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:    ],
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:    "1": [
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:        {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "devices": [
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "/dev/loop4"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            ],
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_name": "ceph_lv1",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_size": "21470642176",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "name": "ceph_lv1",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "tags": {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cluster_name": "ceph",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.crush_device_class": "",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.encrypted": "0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osd_id": "1",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.type": "block",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.vdo": "0"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            },
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "type": "block",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "vg_name": "ceph_vg1"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:        }
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:    ],
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:    "2": [
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:        {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "devices": [
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "/dev/loop5"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            ],
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_name": "ceph_lv2",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_size": "21470642176",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "name": "ceph_lv2",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "tags": {
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.cluster_name": "ceph",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.crush_device_class": "",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.encrypted": "0",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osd_id": "2",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.type": "block",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:                "ceph.vdo": "0"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            },
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "type": "block",
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:            "vg_name": "ceph_vg2"
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:        }
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]:    ]
Nov 29 00:11:55 np0005539482 cool_mestorf[106061]: }
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 00:11:55 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 00:11:55 np0005539482 systemd[1]: libpod-b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab.scope: Deactivated successfully.
Nov 29 00:11:55 np0005539482 podman[106042]: 2025-11-29 05:11:55.98694516 +0000 UTC m=+1.034143924 container died b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 00:11:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 82 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82 pruub=12.563106537s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=35'39 mlcod 35'39 active pruub 150.616012573s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:11:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 82 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82 pruub=12.563038826s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 150.616012573s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 00:11:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 00:11:55 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 29 00:11:56 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 82 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:11:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405-merged.mount: Deactivated successfully.
Nov 29 00:11:56 np0005539482 podman[106042]: 2025-11-29 05:11:56.057251752 +0000 UTC m=+1.104450476 container remove b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:11:56 np0005539482 systemd[1]: libpod-conmon-b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab.scope: Deactivated successfully.
Nov 29 00:11:56 np0005539482 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 00:11:56 np0005539482 systemd[1]: session-33.scope: Consumed 8.551s CPU time.
Nov 29 00:11:56 np0005539482 systemd-logind[793]: Session 33 logged out. Waiting for processes to exit.
Nov 29 00:11:56 np0005539482 systemd-logind[793]: Removed session 33.
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.652304001 +0000 UTC m=+0.034984043 container create 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:11:56 np0005539482 systemd[1]: Started libpod-conmon-5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16.scope.
Nov 29 00:11:56 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.73096873 +0000 UTC m=+0.113648772 container init 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.637615486 +0000 UTC m=+0.020295548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:11:56 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.737043922 +0000 UTC m=+0.119723954 container start 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.740051182 +0000 UTC m=+0.122731244 container attach 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:11:56 np0005539482 nervous_napier[106264]: 167 167
Nov 29 00:11:56 np0005539482 systemd[1]: libpod-5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16.scope: Deactivated successfully.
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.74248881 +0000 UTC m=+0.125168852 container died 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:11:56 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 00:11:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a1068e914adbb856978618698790c627e6792772b3f39c07dc3ffb76bdcbbc52-merged.mount: Deactivated successfully.
Nov 29 00:11:56 np0005539482 podman[106247]: 2025-11-29 05:11:56.783885432 +0000 UTC m=+0.166565474 container remove 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:11:56 np0005539482 systemd[1]: libpod-conmon-5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16.scope: Deactivated successfully.
Nov 29 00:11:56 np0005539482 podman[106288]: 2025-11-29 05:11:56.933015516 +0000 UTC m=+0.039061029 container create 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:11:56 np0005539482 systemd[1]: Started libpod-conmon-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope.
Nov 29 00:11:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 00:11:56 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 00:11:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 00:11:57 np0005539482 podman[106288]: 2025-11-29 05:11:56.914409639 +0000 UTC m=+0.020455182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 00:11:57 np0005539482 podman[106288]: 2025-11-29 05:11:57.016719592 +0000 UTC m=+0.122765105 container init 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:11:57 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 83 pg[6.d( v 35'39 lc 31'13 (0'0,35'39] local-lis/les=82/83 n=1 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:11:57 np0005539482 podman[106288]: 2025-11-29 05:11:57.029524643 +0000 UTC m=+0.135570186 container start 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:11:57 np0005539482 podman[106288]: 2025-11-29 05:11:57.033860075 +0000 UTC m=+0.139905608 container attach 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:11:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 00:11:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 00:11:57 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 29 00:11:57 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 00:11:58 np0005539482 romantic_payne[106304]: {
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "osd_id": 0,
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "type": "bluestore"
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:    },
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "osd_id": 1,
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "type": "bluestore"
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:    },
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "osd_id": 2,
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:        "type": "bluestore"
Nov 29 00:11:58 np0005539482 romantic_payne[106304]:    }
Nov 29 00:11:58 np0005539482 romantic_payne[106304]: }
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 00:11:58 np0005539482 systemd[1]: libpod-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope: Deactivated successfully.
Nov 29 00:11:58 np0005539482 systemd[1]: libpod-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope: Consumed 1.053s CPU time.
Nov 29 00:11:58 np0005539482 conmon[106304]: conmon 06df2283b28692d5aa1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope/container/memory.events
Nov 29 00:11:58 np0005539482 podman[106288]: 2025-11-29 05:11:58.079481009 +0000 UTC m=+1.185526532 container died 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:11:58 np0005539482 systemd[1]: var-lib-containers-storage-overlay-77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521-merged.mount: Deactivated successfully.
Nov 29 00:11:58 np0005539482 podman[106288]: 2025-11-29 05:11:58.131391318 +0000 UTC m=+1.237436851 container remove 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:11:58 np0005539482 systemd[1]: libpod-conmon-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope: Deactivated successfully.
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:11:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:58 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e100450d-b0d5-4482-9732-0c64a871f559 does not exist
Nov 29 00:11:58 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 4c11a984-9d3f-4b95-99c7-b04d79c9e40d does not exist
Nov 29 00:11:58 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 29 00:11:58 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 29 00:11:58 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 29 00:11:59 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:11:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 00:11:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 00:12:00 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 00:12:00 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 85 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85 pruub=8.313556671s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=35'39 mlcod 35'39 active pruub 150.433609009s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:00 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 85 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85 pruub=8.313452721s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 150.433609009s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:00 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 85 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:00 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 29 00:12:00 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 00:12:01 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 86 pg[6.f( v 35'39 lc 31'1 (0'0,35'39] local-lis/les=85/86 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 00:12:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 00:12:01 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 29 00:12:01 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 29 00:12:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 00:12:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 00:12:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 00:12:02 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 00:12:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 00:12:02 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 29 00:12:02 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 29 00:12:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 00:12:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 00:12:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 00:12:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 00:12:03 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 29 00:12:03 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 29 00:12:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 00:12:04 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 00:12:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 00:12:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 00:12:04 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 00:12:04 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 29 00:12:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:04 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 29 00:12:04 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 29 00:12:04 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 29 00:12:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 00:12:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 0 objects/s recovering
Nov 29 00:12:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 00:12:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 00:12:05 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 29 00:12:05 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 29 00:12:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 00:12:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 00:12:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 00:12:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 00:12:06 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 00:12:06 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.9 deep-scrub starts
Nov 29 00:12:06 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.9 deep-scrub ok
Nov 29 00:12:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 00:12:07 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 00:12:07 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 00:12:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 29 00:12:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 00:12:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 00:12:07 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 29 00:12:07 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 29 00:12:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 00:12:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 00:12:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 00:12:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 00:12:08 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 00:12:08 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 29 00:12:08 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 29 00:12:08 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 29 00:12:08 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 00:12:09 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 90 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90 pruub=10.151612282s) [2] r=-1 lpr=90 pi=[56,90)/1 crt=38'583 mlcod 0'0 active pruub 161.373535156s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:09 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 90 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90 pruub=10.151553154s) [2] r=-1 lpr=90 pi=[56,90)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 161.373535156s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:09 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 00:12:09 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 91 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=0 lpr=91 pi=[56,91)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:09 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 91 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=0 lpr=91 pi=[56,91)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:09 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[56,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:09 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[56,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:09 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 29 00:12:09 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 29 00:12:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 115 B/s, 0 objects/s recovering
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 00:12:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 00:12:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 00:12:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 00:12:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 00:12:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 00:12:10 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 00:12:10 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 92 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=91/92 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] async=[2] r=0 lpr=91 pi=[56,91)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:10 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 00:12:10 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 00:12:10 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 00:12:10 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 00:12:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 00:12:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 00:12:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 00:12:11 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 00:12:11 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=91/92 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93 pruub=15.044969559s) [2] async=[2] r=-1 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 38'583 active pruub 168.314895630s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:11 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=91/92 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93 pruub=15.044779778s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 168.314895630s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:11 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:11 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:11 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Nov 29 00:12:11 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:12:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:12:11 np0005539482 systemd-logind[793]: New session 34 of user zuul.
Nov 29 00:12:11 np0005539482 systemd[1]: Started Session 34 of User zuul.
Nov 29 00:12:11 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 00:12:11 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 00:12:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 00:12:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 00:12:12 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 00:12:12 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 94 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=93/94 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:12 np0005539482 python3.9[106554]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 00:12:12 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 29 00:12:12 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 29 00:12:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:12:14 np0005539482 python3.9[106728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:12:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:14 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 29 00:12:14 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 29 00:12:14 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 29 00:12:14 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 29 00:12:15 np0005539482 python3.9[106884]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:12:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 00:12:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 00:12:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 00:12:16 np0005539482 python3.9[107037]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:12:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 00:12:16 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 00:12:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 00:12:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 00:12:16 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 00:12:16 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 95 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95 pruub=10.042437553s) [1] r=-1 lpr=95 pi=[55,95)/1 crt=38'583 mlcod 0'0 active pruub 168.364135742s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:16 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 95 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95 pruub=10.042379379s) [1] r=-1 lpr=95 pi=[55,95)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 168.364135742s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:16 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95) [1] r=0 lpr=95 pi=[55,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:16 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Nov 29 00:12:16 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Nov 29 00:12:17 np0005539482 python3.9[107191]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:12:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 00:12:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 00:12:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 00:12:17 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 00:12:17 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[55,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:17 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[55,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:17 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 96 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=0 lpr=96 pi=[55,96)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:17 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 96 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=0 lpr=96 pi=[55,96)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 00:12:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 00:12:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 00:12:17 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Nov 29 00:12:17 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Nov 29 00:12:18 np0005539482 python3.9[107343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:12:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 00:12:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 00:12:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 00:12:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 00:12:18 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 00:12:18 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 97 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=97 pruub=15.896838188s) [0] r=-1 lpr=97 pi=[67,97)/1 crt=38'583 mlcod 0'0 active pruub 166.364501953s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:18 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 97 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=97 pruub=15.896558762s) [0] r=-1 lpr=97 pi=[67,97)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 166.364501953s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:18 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=97) [0] r=0 lpr=97 pi=[67,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:18 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 97 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=96/97 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[55,96)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:19 np0005539482 python3.9[107493]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:12:19 np0005539482 network[107510]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:12:19 np0005539482 network[107511]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:12:19 np0005539482 network[107512]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:12:19 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 29 00:12:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 00:12:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 00:12:19 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 00:12:19 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 29 00:12:19 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 98 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=0 lpr=98 pi=[67,98)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:19 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 98 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=0 lpr=98 pi=[67,98)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:19 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[67,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:19 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[67,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:19 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=96/97 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98 pruub=15.741697311s) [1] async=[1] r=-1 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 38'583 active pruub 176.988769531s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:19 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=96/97 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98 pruub=15.741625786s) [1] r=-1 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 176.988769531s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:19 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:19 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 00:12:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 00:12:20 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 29 00:12:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 00:12:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 00:12:20 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 00:12:20 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 29 00:12:20 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 99 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:20 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 00:12:20 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 99 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] async=[0] r=0 lpr=98 pi=[67,98)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:20 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 00:12:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 00:12:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 00:12:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 00:12:21 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 00:12:21 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:21 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:21 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100 pruub=15.557282448s) [0] async=[0] r=-1 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 38'583 active pruub 169.064346313s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:21 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100 pruub=15.557166100s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 169.064346313s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:22 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 29 00:12:22 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 29 00:12:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 00:12:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 00:12:22 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 00:12:22 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 101 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=100/101 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:22 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 29 00:12:22 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 29 00:12:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Nov 29 00:12:23 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 00:12:23 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 29 00:12:23 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 00:12:23 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 29 00:12:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:24 np0005539482 python3.9[107774]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:12:24 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 29 00:12:24 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 29 00:12:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 00:12:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 00:12:25 np0005539482 python3.9[107924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:12:26 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 00:12:26 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 29 00:12:26 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 29 00:12:27 np0005539482 python3.9[108078]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:12:27 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 29 00:12:27 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 29 00:12:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 00:12:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 00:12:27 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Nov 29 00:12:27 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Nov 29 00:12:28 np0005539482 python3.9[108236]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:12:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 00:12:28 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 29 00:12:28 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:29 np0005539482 python3.9[108320]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:12:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 00:12:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 00:12:29 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 29 00:12:29 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 29 00:12:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 00:12:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 104 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=104 pruub=11.739901543s) [2] r=-1 lpr=104 pi=[55,104)/1 crt=38'583 mlcod 0'0 active pruub 184.364822388s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:30 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 104 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=104 pruub=11.739595413s) [2] r=-1 lpr=104 pi=[55,104)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 184.364822388s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:30 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=104) [2] r=0 lpr=104 pi=[55,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:30 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 29 00:12:30 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 29 00:12:31 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 29 00:12:31 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 29 00:12:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 00:12:31 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 00:12:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[55,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:31 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[55,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:31 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 105 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=0 lpr=105 pi=[55,105)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:31 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 105 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=0 lpr=105 pi=[55,105)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 00:12:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 00:12:32 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 00:12:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 00:12:32 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 29 00:12:32 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 29 00:12:33 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 106 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=105/106 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] async=[2] r=0 lpr=105 pi=[55,105)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 00:12:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 00:12:33 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=105/106 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107 pruub=15.618248940s) [2] async=[2] r=-1 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 38'583 active pruub 191.227386475s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:33 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=105/106 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107 pruub=15.618089676s) [2] r=-1 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 191.227386475s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:33 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107) [2] r=0 lpr=107 pi=[55,107)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:33 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107) [2] r=0 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:33 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 29 00:12:33 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 29 00:12:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 00:12:34 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 00:12:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 00:12:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 00:12:34 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 108 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=107/108 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107) [2] r=0 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Nov 29 00:12:35 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Nov 29 00:12:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 29 00:12:36 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 29 00:12:36 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 29 00:12:36 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 00:12:37 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 00:12:37 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 29 00:12:37 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 29 00:12:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 00:12:37 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 29 00:12:37 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 29 00:12:39 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 00:12:39 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 00:12:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:39 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 29 00:12:39 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 29 00:12:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 29 00:12:39 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 00:12:39 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 00:12:40 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 29 00:12:40 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 29 00:12:40 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 29 00:12:40 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:12:41
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:12:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 00:12:41 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 109 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=14.298078537s) [0] r=-1 lpr=109 pi=[80,109)/1 crt=38'583 mlcod 0'0 active pruub 188.080856323s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:41 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 109 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=14.298032761s) [0] r=-1 lpr=109 pi=[80,109)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 188.080856323s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 00:12:41 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=109) [0] r=0 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:41 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 29 00:12:41 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 29 00:12:42 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 29 00:12:42 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 29 00:12:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 00:12:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 00:12:42 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 00:12:42 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:42 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 00:12:42 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 110 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=0 lpr=110 pi=[80,110)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:42 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 110 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=0 lpr=110 pi=[80,110)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:43 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 29 00:12:43 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 29 00:12:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 00:12:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 00:12:43 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 111 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=110/111 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[80,110)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 00:12:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 00:12:44 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 00:12:44 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=110/111 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.749832153s) [0] async=[0] r=-1 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 38'583 active pruub 192.114669800s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:44 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=110/111 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.749622345s) [0] r=-1 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 192.114669800s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:44 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:44 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 00:12:45 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 29 00:12:45 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 29 00:12:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 00:12:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 00:12:45 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 00:12:45 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 113 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=112/113 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 29 00:12:46 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 29 00:12:46 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 29 00:12:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 29 00:12:47 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 29 00:12:48 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 29 00:12:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:49 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 00:12:49 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 00:12:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 00:12:50 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 29 00:12:50 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:12:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 00:12:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 00:12:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 00:12:52 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 29 00:12:52 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 29 00:12:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 00:12:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 00:12:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 00:12:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 00:12:52 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 00:12:52 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 114 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114 pruub=13.921597481s) [0] r=-1 lpr=114 pi=[67,114)/1 crt=38'583 mlcod 0'0 active pruub 198.359375000s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:52 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 114 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114 pruub=13.921504021s) [0] r=-1 lpr=114 pi=[67,114)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 198.359375000s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:52 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114) [0] r=0 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 00:12:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 00:12:53 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 00:12:53 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[67,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:53 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[67,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 00:12:53 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 115 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=0 lpr=115 pi=[67,115)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:53 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 115 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=0 lpr=115 pi=[67,115)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:12:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 00:12:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:12:54 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 00:12:54 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 00:12:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 00:12:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:12:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 00:12:54 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 00:12:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 116 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=116 pruub=12.907160759s) [1] r=-1 lpr=116 pi=[68,116)/1 crt=38'583 mlcod 0'0 active pruub 199.364028931s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 116 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=116 pruub=12.906765938s) [1] r=-1 lpr=116 pi=[68,116)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 199.364028931s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 00:12:54 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=116) [1] r=0 lpr=116 pi=[68,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:54 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 116 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=115/116 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[67,115)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:55 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 00:12:55 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 00:12:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 00:12:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 00:12:55 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 00:12:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=0 lpr=117 pi=[68,117)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=0 lpr=117 pi=[68,117)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=115/116 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117 pruub=15.401042938s) [0] async=[0] r=-1 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 38'583 active pruub 202.881149292s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:55 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=115/116 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117 pruub=15.400755882s) [0] r=-1 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 202.881149292s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 00:12:55 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:55 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117) [0] r=0 lpr=117 pi=[67,117)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:55 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117) [0] r=0 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 29 00:12:56 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 29 00:12:56 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 29 00:12:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 00:12:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 00:12:56 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 00:12:56 np0005539482 ceph-osd[89151]: osd.0 pg_epoch: 118 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117) [0] r=0 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:57 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 118 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[68,117)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 29 00:12:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 00:12:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 00:12:57 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 00:12:57 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119 pruub=15.632976532s) [1] async=[1] r=-1 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 38'583 active pruub 205.383071899s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:57 np0005539482 ceph-osd[91343]: osd.2 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119 pruub=15.632826805s) [1] r=-1 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 205.383071899s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 00:12:57 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 00:12:57 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 00:12:58 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Nov 29 00:12:58 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Nov 29 00:12:58 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 00:12:58 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 00:12:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 00:12:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 00:12:58 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 00:12:58 np0005539482 ceph-osd[90181]: osd.1 pg_epoch: 120 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=119/120 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:12:59 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 593b6ef3-479b-45bf-be54-54f51eadb4ff does not exist
Nov 29 00:12:59 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 01084ed7-be77-4add-80b7-c683970055d0 does not exist
Nov 29 00:12:59 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 720b8e6c-eb94-49ad-b9fa-9e6b0a315aff does not exist
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:12:59 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 29 00:12:59 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 29 00:12:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%); 27 B/s, 1 objects/s recovering
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:12:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.678396611 +0000 UTC m=+0.048318543 container create cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:12:59 np0005539482 systemd[1]: Started libpod-conmon-cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba.scope.
Nov 29 00:12:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.650885455 +0000 UTC m=+0.020807417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.758313781 +0000 UTC m=+0.128235733 container init cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.764923757 +0000 UTC m=+0.134845689 container start cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.768298213 +0000 UTC m=+0.138220165 container attach cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:12:59 np0005539482 adoring_haibt[108755]: 167 167
Nov 29 00:12:59 np0005539482 systemd[1]: libpod-cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba.scope: Deactivated successfully.
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.771070853 +0000 UTC m=+0.140992785 container died cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:12:59 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5bea31166212c86de9f55b8feeeb892f6c3430950f2a9e9231b6fc3709225b9c-merged.mount: Deactivated successfully.
Nov 29 00:12:59 np0005539482 podman[108738]: 2025-11-29 05:12:59.82554064 +0000 UTC m=+0.195462572 container remove cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:12:59 np0005539482 systemd[1]: libpod-conmon-cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba.scope: Deactivated successfully.
Nov 29 00:12:59 np0005539482 podman[108779]: 2025-11-29 05:12:59.975467279 +0000 UTC m=+0.039749946 container create 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:13:00 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 29 00:13:00 np0005539482 systemd[1]: Started libpod-conmon-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope.
Nov 29 00:13:00 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 29 00:13:00 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:13:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:00 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:00 np0005539482 podman[108779]: 2025-11-29 05:13:00.044127215 +0000 UTC m=+0.108409912 container init 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:13:00 np0005539482 podman[108779]: 2025-11-29 05:12:59.956285494 +0000 UTC m=+0.020568181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:13:00 np0005539482 podman[108779]: 2025-11-29 05:13:00.053846351 +0000 UTC m=+0.118129008 container start 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:13:00 np0005539482 podman[108779]: 2025-11-29 05:13:00.070300196 +0000 UTC m=+0.134582893 container attach 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:01 np0005539482 sad_margulis[108796]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:13:01 np0005539482 sad_margulis[108796]: --> relative data size: 1.0
Nov 29 00:13:01 np0005539482 sad_margulis[108796]: --> All data devices are unavailable
Nov 29 00:13:01 np0005539482 systemd[1]: libpod-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope: Deactivated successfully.
Nov 29 00:13:01 np0005539482 systemd[1]: libpod-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope: Consumed 1.061s CPU time.
Nov 29 00:13:01 np0005539482 podman[108779]: 2025-11-29 05:13:01.175607605 +0000 UTC m=+1.239890302 container died 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:01 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590-merged.mount: Deactivated successfully.
Nov 29 00:13:01 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 29 00:13:01 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 29 00:13:01 np0005539482 podman[108779]: 2025-11-29 05:13:01.284709812 +0000 UTC m=+1.348992489 container remove 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:13:01 np0005539482 systemd[1]: libpod-conmon-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope: Deactivated successfully.
Nov 29 00:13:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:02.015638888 +0000 UTC m=+0.067839857 container create f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:13:02 np0005539482 systemd[1]: Started libpod-conmon-f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c.scope.
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:01.987209138 +0000 UTC m=+0.039410157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:13:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:02.124949111 +0000 UTC m=+0.177150120 container init f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:02.133487516 +0000 UTC m=+0.185688475 container start f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:02.137251252 +0000 UTC m=+0.189452221 container attach f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:02 np0005539482 unruffled_engelbart[108995]: 167 167
Nov 29 00:13:02 np0005539482 systemd[1]: libpod-f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c.scope: Deactivated successfully.
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:02.145007467 +0000 UTC m=+0.197208426 container died f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:13:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-523a7020b3a3a2eaa5dbf5086d2aa6461ae6ec80058d46bdc3e62e844999f233-merged.mount: Deactivated successfully.
Nov 29 00:13:02 np0005539482 podman[108979]: 2025-11-29 05:13:02.198744246 +0000 UTC m=+0.250945195 container remove f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:13:02 np0005539482 systemd[1]: libpod-conmon-f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c.scope: Deactivated successfully.
Nov 29 00:13:02 np0005539482 podman[109019]: 2025-11-29 05:13:02.434857654 +0000 UTC m=+0.066630865 container create 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:13:02 np0005539482 systemd[1]: Started libpod-conmon-0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35.scope.
Nov 29 00:13:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:13:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:02 np0005539482 podman[109019]: 2025-11-29 05:13:02.412701444 +0000 UTC m=+0.044474655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:13:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:02 np0005539482 podman[109019]: 2025-11-29 05:13:02.522687514 +0000 UTC m=+0.154460715 container init 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:13:02 np0005539482 podman[109019]: 2025-11-29 05:13:02.533431746 +0000 UTC m=+0.165204927 container start 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:13:02 np0005539482 podman[109019]: 2025-11-29 05:13:02.53634286 +0000 UTC m=+0.168116081 container attach 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:13:03 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 00:13:03 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]: {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:    "0": [
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:        {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "devices": [
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "/dev/loop3"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            ],
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_name": "ceph_lv0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_size": "21470642176",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "name": "ceph_lv0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "tags": {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cluster_name": "ceph",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.crush_device_class": "",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.encrypted": "0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osd_id": "0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.type": "block",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.vdo": "0"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            },
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "type": "block",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "vg_name": "ceph_vg0"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:        }
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:    ],
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:    "1": [
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:        {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "devices": [
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "/dev/loop4"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            ],
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_name": "ceph_lv1",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_size": "21470642176",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "name": "ceph_lv1",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "tags": {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cluster_name": "ceph",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.crush_device_class": "",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.encrypted": "0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osd_id": "1",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.type": "block",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.vdo": "0"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            },
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "type": "block",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "vg_name": "ceph_vg1"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:        }
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:    ],
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:    "2": [
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:        {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "devices": [
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "/dev/loop5"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            ],
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_name": "ceph_lv2",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_size": "21470642176",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "name": "ceph_lv2",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "tags": {
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.cluster_name": "ceph",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.crush_device_class": "",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.encrypted": "0",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osd_id": "2",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.type": "block",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:                "ceph.vdo": "0"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            },
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "type": "block",
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:            "vg_name": "ceph_vg2"
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:        }
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]:    ]
Nov 29 00:13:03 np0005539482 hungry_swirles[109035]: }
Nov 29 00:13:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 29 00:13:03 np0005539482 systemd[1]: libpod-0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35.scope: Deactivated successfully.
Nov 29 00:13:03 np0005539482 podman[109019]: 2025-11-29 05:13:03.345086281 +0000 UTC m=+0.976859462 container died 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:13:03 np0005539482 systemd[1]: var-lib-containers-storage-overlay-05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944-merged.mount: Deactivated successfully.
Nov 29 00:13:03 np0005539482 podman[109019]: 2025-11-29 05:13:03.404599426 +0000 UTC m=+1.036372597 container remove 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:13:03 np0005539482 systemd[1]: libpod-conmon-0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35.scope: Deactivated successfully.
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.059963401 +0000 UTC m=+0.058207793 container create 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:13:04 np0005539482 systemd[1]: Started libpod-conmon-069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810.scope.
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.028039924 +0000 UTC m=+0.026284406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:13:04 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.143081412 +0000 UTC m=+0.141325784 container init 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.149031702 +0000 UTC m=+0.147276094 container start 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.153313281 +0000 UTC m=+0.151557753 container attach 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:13:04 np0005539482 pedantic_yonath[109210]: 167 167
Nov 29 00:13:04 np0005539482 systemd[1]: libpod-069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810.scope: Deactivated successfully.
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.155077685 +0000 UTC m=+0.153322057 container died 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:13:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6ea431d17ae595ae58f43bafea93c752e3ba1652781f1691396e4903e6e60623-merged.mount: Deactivated successfully.
Nov 29 00:13:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:04 np0005539482 podman[109193]: 2025-11-29 05:13:04.195502787 +0000 UTC m=+0.193747189 container remove 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:04 np0005539482 systemd[1]: libpod-conmon-069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810.scope: Deactivated successfully.
Nov 29 00:13:04 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 29 00:13:04 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 29 00:13:04 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 29 00:13:04 np0005539482 podman[109234]: 2025-11-29 05:13:04.359881682 +0000 UTC m=+0.039477249 container create ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:13:04 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 29 00:13:04 np0005539482 systemd[1]: Started libpod-conmon-ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671.scope.
Nov 29 00:13:04 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:13:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:13:04 np0005539482 podman[109234]: 2025-11-29 05:13:04.427342327 +0000 UTC m=+0.106937914 container init ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:13:04 np0005539482 podman[109234]: 2025-11-29 05:13:04.433697118 +0000 UTC m=+0.113292685 container start ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:13:04 np0005539482 podman[109234]: 2025-11-29 05:13:04.437706289 +0000 UTC m=+0.117301876 container attach ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:13:04 np0005539482 podman[109234]: 2025-11-29 05:13:04.344834881 +0000 UTC m=+0.024430468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:13:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]: {
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "osd_id": 0,
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "type": "bluestore"
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:    },
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "osd_id": 1,
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "type": "bluestore"
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:    },
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "osd_id": 2,
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:        "type": "bluestore"
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]:    }
Nov 29 00:13:05 np0005539482 frosty_heisenberg[109250]: }
Nov 29 00:13:05 np0005539482 systemd[1]: libpod-ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671.scope: Deactivated successfully.
Nov 29 00:13:05 np0005539482 podman[109234]: 2025-11-29 05:13:05.422394199 +0000 UTC m=+1.101989776 container died ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:13:05 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2-merged.mount: Deactivated successfully.
Nov 29 00:13:05 np0005539482 podman[109234]: 2025-11-29 05:13:05.470430492 +0000 UTC m=+1.150026059 container remove ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:13:05 np0005539482 systemd[1]: libpod-conmon-ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671.scope: Deactivated successfully.
Nov 29 00:13:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:13:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:13:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:13:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:13:05 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 148926dd-8c6c-4325-a946-db354d846842 does not exist
Nov 29 00:13:05 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ac049d9a-2e0d-496a-8c68-0587320fa4e0 does not exist
Nov 29 00:13:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:13:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:13:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 1 objects/s recovering
Nov 29 00:13:08 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 00:13:08 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 00:13:09 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Nov 29 00:13:09 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Nov 29 00:13:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Nov 29 00:13:09 np0005539482 python3.9[109498]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:13:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:13:11 np0005539482 python3.9[109785]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 00:13:12 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 29 00:13:12 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 29 00:13:12 np0005539482 python3.9[109937]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 00:13:13 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 29 00:13:13 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 29 00:13:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:13 np0005539482 python3.9[110089]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:13:14 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 29 00:13:14 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 29 00:13:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:14 np0005539482 python3.9[110241]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 00:13:15 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 29 00:13:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:15 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 29 00:13:15 np0005539482 python3.9[110393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:13:16 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 29 00:13:16 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 29 00:13:16 np0005539482 python3.9[110545]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:13:17 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 00:13:17 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 00:13:17 np0005539482 python3.9[110623]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:13:17 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 29 00:13:17 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 29 00:13:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:18 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 00:13:18 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 00:13:18 np0005539482 python3.9[110775]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:13:18 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 29 00:13:18 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 29 00:13:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:19 np0005539482 python3.9[110929]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 00:13:20 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 29 00:13:20 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 29 00:13:20 np0005539482 python3.9[111082]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 00:13:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:21 np0005539482 python3.9[111235]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 00:13:22 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Nov 29 00:13:22 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Nov 29 00:13:22 np0005539482 python3.9[111387]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 00:13:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:23 np0005539482 python3.9[111539]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:13:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:25 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 00:13:25 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 00:13:25 np0005539482 python3.9[111692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:13:26 np0005539482 python3.9[111844]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:13:26 np0005539482 python3.9[111922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:13:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:27 np0005539482 python3.9[112074]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:13:28 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 29 00:13:28 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 29 00:13:28 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 29 00:13:28 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 29 00:13:28 np0005539482 python3.9[112152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:13:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:29 np0005539482 python3.9[112304]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:13:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 00:13:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 00:13:31 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 29 00:13:31 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 29 00:13:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:31 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 00:13:31 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 00:13:31 np0005539482 python3.9[112455]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:13:32 np0005539482 python3.9[112607]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 00:13:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:33 np0005539482 python3.9[112757]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:13:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:34 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 29 00:13:34 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 29 00:13:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 29 00:13:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 29 00:13:34 np0005539482 python3.9[112909]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:13:34 np0005539482 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 00:13:34 np0005539482 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 00:13:34 np0005539482 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 00:13:34 np0005539482 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 00:13:35 np0005539482 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 00:13:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:35 np0005539482 python3.9[113071]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 00:13:36 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 00:13:36 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 00:13:36 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 00:13:36 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 00:13:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:38 np0005539482 python3.9[113223]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:13:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:39 np0005539482 python3.9[113377]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:13:40 np0005539482 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 00:13:40 np0005539482 systemd[1]: session-34.scope: Consumed 1min 5.539s CPU time.
Nov 29 00:13:40 np0005539482 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Nov 29 00:13:40 np0005539482 systemd-logind[793]: Removed session 34.
Nov 29 00:13:40 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 00:13:40 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 00:13:41 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 00:13:41 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:13:41
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups']
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:13:41 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:13:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:13:41 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 00:13:41 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 29 00:13:41 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 29 00:13:42 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 00:13:42 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 00:13:43 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 00:13:43 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 00:13:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:44 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 29 00:13:44 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 29 00:13:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:45 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 00:13:45 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 00:13:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:46 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 00:13:46 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 00:13:46 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 00:13:46 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 00:13:46 np0005539482 systemd-logind[793]: New session 35 of user zuul.
Nov 29 00:13:46 np0005539482 systemd[1]: Started Session 35 of User zuul.
Nov 29 00:13:46 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 00:13:46 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 00:13:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:47 np0005539482 python3.9[113557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:13:48 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 29 00:13:48 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 29 00:13:48 np0005539482 python3.9[113713]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 00:13:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:49 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 00:13:49 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 00:13:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:50 np0005539482 python3.9[113866]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:13:50 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 00:13:50 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 00:13:50 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 00:13:50 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 00:13:50 np0005539482 python3.9[113950]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:13:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:51 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 00:13:51 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 00:13:52 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 00:13:52 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 00:13:53 np0005539482 python3.9[114103]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:13:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:54 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 00:13:54 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 00:13:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:55 np0005539482 python3.9[114256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:13:56 np0005539482 python3.9[114409]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:13:56 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 00:13:56 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 00:13:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:57 np0005539482 python3.9[114561]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 00:13:58 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 00:13:58 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 00:13:58 np0005539482 python3.9[114711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:13:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:13:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:13:59 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 00:13:59 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 00:13:59 np0005539482 python3.9[114869]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:14:00 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 00:14:00 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 00:14:01 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 00:14:01 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 00:14:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:14:02 np0005539482 python3.9[115022]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:14:03 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 00:14:03 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 00:14:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:14:03 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 00:14:03 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 00:14:03 np0005539482 python3.9[115309]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 00:14:04 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 00:14:04 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 00:14:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:14:04 np0005539482 python3.9[115459]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:14:04 np0005539482 systemd[76809]: Created slice User Background Tasks Slice.
Nov 29 00:14:04 np0005539482 systemd[76809]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 00:14:04 np0005539482 systemd[76809]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 00:14:05 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 29 00:14:05 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 29 00:14:05 np0005539482 python3.9[115614]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:14:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:14:05 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 00:14:05 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 00:14:05 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 00:14:06 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:14:06 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 2e5ac6cc-a889-4d23-b5d3-f4b6e7c751cc does not exist
Nov 29 00:14:06 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 3d883fe7-32d6-45d6-9707-aecaad9b7fab does not exist
Nov 29 00:14:06 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 38e67fe7-2e60-4429-9227-5ac4acbbe768 does not exist
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:14:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:14:06 np0005539482 podman[115940]: 2025-11-29 05:14:06.941699181 +0000 UTC m=+0.043193703 container create 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:14:06 np0005539482 systemd[1]: Started libpod-conmon-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope.
Nov 29 00:14:07 np0005539482 podman[115940]: 2025-11-29 05:14:06.923475333 +0000 UTC m=+0.024969855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:14:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:14:07 np0005539482 podman[115940]: 2025-11-29 05:14:07.053227353 +0000 UTC m=+0.154721895 container init 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:14:07 np0005539482 podman[115940]: 2025-11-29 05:14:07.06980234 +0000 UTC m=+0.171296862 container start 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:14:07 np0005539482 podman[115940]: 2025-11-29 05:14:07.073324097 +0000 UTC m=+0.174818639 container attach 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:14:07 np0005539482 systemd[1]: libpod-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope: Deactivated successfully.
Nov 29 00:14:07 np0005539482 lucid_yonath[115990]: 167 167
Nov 29 00:14:07 np0005539482 conmon[115990]: conmon 8345e7ad5a2719a8ef77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope/container/memory.events
Nov 29 00:14:07 np0005539482 podman[115940]: 2025-11-29 05:14:07.081725363 +0000 UTC m=+0.183219905 container died 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:14:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-cdae0fa13ce8fe99b5e4a497d057ea1f01f04005168c7d8dfdd75da6c86d0962-merged.mount: Deactivated successfully.
Nov 29 00:14:07 np0005539482 podman[115940]: 2025-11-29 05:14:07.139686617 +0000 UTC m=+0.241181159 container remove 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:14:07 np0005539482 systemd[1]: libpod-conmon-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope: Deactivated successfully.
Nov 29 00:14:07 np0005539482 podman[116077]: 2025-11-29 05:14:07.326675484 +0000 UTC m=+0.053357662 container create f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:14:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:14:07 np0005539482 podman[116077]: 2025-11-29 05:14:07.299178868 +0000 UTC m=+0.025861056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:14:07 np0005539482 systemd[1]: Started libpod-conmon-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope.
Nov 29 00:14:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:14:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:07 np0005539482 podman[116077]: 2025-11-29 05:14:07.46159356 +0000 UTC m=+0.188275738 container init f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:14:07 np0005539482 podman[116077]: 2025-11-29 05:14:07.478097186 +0000 UTC m=+0.204779364 container start f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:14:07 np0005539482 podman[116077]: 2025-11-29 05:14:07.484028612 +0000 UTC m=+0.210710790 container attach f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:14:07 np0005539482 python3.9[116072]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:14:08 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 00:14:08 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 00:14:08 np0005539482 intelligent_archimedes[116093]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:14:08 np0005539482 intelligent_archimedes[116093]: --> relative data size: 1.0
Nov 29 00:14:08 np0005539482 intelligent_archimedes[116093]: --> All data devices are unavailable
Nov 29 00:14:08 np0005539482 podman[116077]: 2025-11-29 05:14:08.575026109 +0000 UTC m=+1.301708247 container died f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:14:08 np0005539482 systemd[1]: libpod-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope: Deactivated successfully.
Nov 29 00:14:08 np0005539482 systemd[1]: libpod-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope: Consumed 1.045s CPU time.
Nov 29 00:14:08 np0005539482 systemd[1]: var-lib-containers-storage-overlay-31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b-merged.mount: Deactivated successfully.
Nov 29 00:14:08 np0005539482 podman[116077]: 2025-11-29 05:14:08.634615034 +0000 UTC m=+1.361297182 container remove f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:14:08 np0005539482 systemd[1]: libpod-conmon-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope: Deactivated successfully.
Nov 29 00:14:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.202708379 +0000 UTC m=+0.061975525 container create 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:14:09 np0005539482 systemd[1]: Started libpod-conmon-02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29.scope.
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.169951773 +0000 UTC m=+0.029218989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:14:09 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.289412761 +0000 UTC m=+0.148679937 container init 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.299398706 +0000 UTC m=+0.158665872 container start 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.303564799 +0000 UTC m=+0.162832065 container attach 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:14:09 np0005539482 goofy_khorana[116370]: 167 167
Nov 29 00:14:09 np0005539482 systemd[1]: libpod-02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29.scope: Deactivated successfully.
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.306418129 +0000 UTC m=+0.165685265 container died 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:14:09 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 29 00:14:09 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 29 00:14:09 np0005539482 systemd[1]: var-lib-containers-storage-overlay-115eeb9252794fd8aa232c5db2a0b674fef0b3bc0c7f0a5554a05e1bcfeb1bbb-merged.mount: Deactivated successfully.
Nov 29 00:14:09 np0005539482 podman[116323]: 2025-11-29 05:14:09.347897217 +0000 UTC m=+0.207164353 container remove 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:14:09 np0005539482 systemd[1]: libpod-conmon-02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29.scope: Deactivated successfully.
Nov 29 00:14:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:14:09 np0005539482 podman[116465]: 2025-11-29 05:14:09.516078502 +0000 UTC m=+0.046725100 container create 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:14:09 np0005539482 systemd[1]: Started libpod-conmon-4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68.scope.
Nov 29 00:14:09 np0005539482 podman[116465]: 2025-11-29 05:14:09.501291888 +0000 UTC m=+0.031938496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:14:09 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:14:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:14:09 np0005539482 podman[116465]: 2025-11-29 05:14:09.617750031 +0000 UTC m=+0.148396709 container init 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:14:09 np0005539482 podman[116465]: 2025-11-29 05:14:09.628653559 +0000 UTC m=+0.159300197 container start 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:14:09 np0005539482 podman[116465]: 2025-11-29 05:14:09.632690099 +0000 UTC m=+0.163336737 container attach 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:14:09 np0005539482 python3.9[116474]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:14:09 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Nov 29 00:14:10 np0005539482 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]: {
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:    "0": [
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:        {
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "devices": [
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "/dev/loop3"
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            ],
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "lv_name": "ceph_lv0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "lv_size": "21470642176",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "name": "ceph_lv0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:            "tags": {
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.cluster_name": "ceph",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.crush_device_class": "",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.encrypted": "0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.osd_id": "0",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:14:10 np0005539482 hopeful_dewdney[116485]:                "ceph.type": "block",
Nov 29 00:15:56 np0005539482 python3.9[128251]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:15:56 np0005539482 rsyslogd[1003]: imjournal: 1389 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 00:15:57 np0005539482 python3[128404]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 00:15:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:15:58 np0005539482 python3.9[128556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:15:58 np0005539482 python3.9[128634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:15:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:15:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:15:59 np0005539482 python3.9[128786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:15:59 np0005539482 python3.9[128864]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:00 np0005539482 python3.9[129016]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:16:01 np0005539482 python3.9[129094]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:01 np0005539482 python3.9[129246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:16:02 np0005539482 python3.9[129324]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:03 np0005539482 python3.9[129476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:16:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:03 np0005539482 python3.9[129554]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:04 np0005539482 python3.9[129706]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:16:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:05 np0005539482 python3.9[129861]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:06 np0005539482 python3.9[130013]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:07 np0005539482 python3.9[130165]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:08 np0005539482 python3.9[130317]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 00:16:09 np0005539482 python3.9[130469]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 00:16:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:09 np0005539482 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 00:16:09 np0005539482 systemd[1]: session-39.scope: Consumed 32.968s CPU time.
Nov 29 00:16:09 np0005539482 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Nov 29 00:16:09 np0005539482 systemd-logind[793]: Removed session 39.
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:16:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:15 np0005539482 systemd-logind[793]: New session 40 of user zuul.
Nov 29 00:16:15 np0005539482 systemd[1]: Started Session 40 of User zuul.
Nov 29 00:16:16 np0005539482 python3.9[130649]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 00:16:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:17 np0005539482 python3.9[130801]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:16:18 np0005539482 python3.9[130955]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 00:16:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:19 np0005539482 python3.9[131107]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.gvo9zkfa follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:16:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:19 np0005539482 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 00:16:20 np0005539482 python3.9[131234]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.gvo9zkfa mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393378.6024542-44-275381334213989/.source.gvo9zkfa _original_basename=.ovo4rsvg follow=False checksum=1b0e63c11fa90fba31690abb7f0e5ecfc577d3bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:21 np0005539482 python3.9[131434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:16:21 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b783ab87-bc32-4f82-be81-925589042a46 does not exist
Nov 29 00:16:21 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0fa7cbe2-8bd6-493d-b2c8-6896ddbb8140 does not exist
Nov 29 00:16:21 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 768e7e1d-41d1-45a6-ac6d-acddb9181dd0 does not exist
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:16:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:16:22 np0005539482 python3.9[131771]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMckHMduWmwA/jneofKzqltVrdb/vEVNoPwADfQfHjxo2ViAjKtzRJxQm+bTvpTXgt3d3GaLwohXhYMtcnWss0rEYtIGMLiXWJAB76Vi4azFd32Hy0mDTGhpqL5tz3X/QJFmASZVWlpRz77RZoFzhuMtQpF581gmKi8QLN3n4kyPvi8IBRjIvdbSyN1hkk5nbYZFrdOhA0K7FLalaYs9fIyoD0rH+dijNp/mY8EbyOAWiPIFfzMZWqy9OkXlUKH6233dlpLGCHfD1uwqM55rv7g+qtOrKiOnqkc5b24MfjM3Dq8B/kIR3GisItM2fI/avStY0whFRyYPTqysal5H+pXy5+QCOGwsWv0POhypuwSVSbtY3NcfizytHcPT2Au6g3Xx/Gazoxx4fVkVLTjtzhz8URfMzAclsZVcUxtFyZlGHtoXumLkWdYeLYQA4dqkQVL7KwOEQp31HXuBfsc98k/UoOj9+SAEbQrLsEBhRXTSsD2bL350GMA7poDjiSC1k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwQmzwqCS97U8wjy82krUlVUeH2sOvejp9p1btw+sbe#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHbvzG6Snia8dc8X++wUykISUD7zTpLyaTM0CVExLn67fyxHoL2pCwIcx6cP7HnIRC6S3Et2Ooooe+xc0kenKn0=#012 create=True mode=0644 path=/tmp/ansible.gvo9zkfa state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:22 np0005539482 podman[131811]: 2025-11-29 05:16:22.581002974 +0000 UTC m=+0.056826971 container create f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:16:22 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:16:22 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:16:22 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:16:22 np0005539482 systemd[1]: Started libpod-conmon-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope.
Nov 29 00:16:22 np0005539482 podman[131811]: 2025-11-29 05:16:22.560000802 +0000 UTC m=+0.035824899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:16:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:16:22 np0005539482 podman[131811]: 2025-11-29 05:16:22.693900518 +0000 UTC m=+0.169724625 container init f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 00:16:22 np0005539482 podman[131811]: 2025-11-29 05:16:22.70569162 +0000 UTC m=+0.181515627 container start f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:16:22 np0005539482 podman[131811]: 2025-11-29 05:16:22.709506642 +0000 UTC m=+0.185330739 container attach f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:22 np0005539482 infallible_cannon[131851]: 167 167
Nov 29 00:16:22 np0005539482 systemd[1]: libpod-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope: Deactivated successfully.
Nov 29 00:16:22 np0005539482 conmon[131851]: conmon f8c6bebe7f7347303974 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope/container/memory.events
Nov 29 00:16:22 np0005539482 podman[131868]: 2025-11-29 05:16:22.779023167 +0000 UTC m=+0.043193766 container died f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:22 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f97b1989deec770e30f5a1fafbb09b84b55d85639e46c740ab9a66ecd718a9b0-merged.mount: Deactivated successfully.
Nov 29 00:16:22 np0005539482 podman[131868]: 2025-11-29 05:16:22.833685345 +0000 UTC m=+0.097855904 container remove f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:16:22 np0005539482 systemd[1]: libpod-conmon-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope: Deactivated successfully.
Nov 29 00:16:23 np0005539482 podman[131930]: 2025-11-29 05:16:23.055303952 +0000 UTC m=+0.068863440 container create dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:16:23 np0005539482 systemd[1]: Started libpod-conmon-dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8.scope.
Nov 29 00:16:23 np0005539482 podman[131930]: 2025-11-29 05:16:23.026370219 +0000 UTC m=+0.039929767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:16:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:16:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:23 np0005539482 podman[131930]: 2025-11-29 05:16:23.155109151 +0000 UTC m=+0.168668639 container init dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:16:23 np0005539482 podman[131930]: 2025-11-29 05:16:23.163976944 +0000 UTC m=+0.177536402 container start dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:16:23 np0005539482 podman[131930]: 2025-11-29 05:16:23.171222008 +0000 UTC m=+0.184781556 container attach dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:16:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:23 np0005539482 python3.9[132027]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gvo9zkfa' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:16:24 np0005539482 busy_dubinsky[131965]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:16:24 np0005539482 busy_dubinsky[131965]: --> relative data size: 1.0
Nov 29 00:16:24 np0005539482 busy_dubinsky[131965]: --> All data devices are unavailable
Nov 29 00:16:24 np0005539482 systemd[1]: libpod-dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8.scope: Deactivated successfully.
Nov 29 00:16:24 np0005539482 podman[131930]: 2025-11-29 05:16:24.193062275 +0000 UTC m=+1.206621723 container died dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523-merged.mount: Deactivated successfully.
Nov 29 00:16:24 np0005539482 podman[131930]: 2025-11-29 05:16:24.245583952 +0000 UTC m=+1.259143390 container remove dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:16:24 np0005539482 systemd[1]: libpod-conmon-dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8.scope: Deactivated successfully.
Nov 29 00:16:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:24 np0005539482 python3.9[132205]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gvo9zkfa state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:24 np0005539482 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 00:16:24 np0005539482 systemd[1]: session-40.scope: Consumed 5.798s CPU time.
Nov 29 00:16:24 np0005539482 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Nov 29 00:16:24 np0005539482 systemd-logind[793]: Removed session 40.
Nov 29 00:16:24 np0005539482 podman[132382]: 2025-11-29 05:16:24.967826826 +0000 UTC m=+0.063923202 container create 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:16:25 np0005539482 systemd[1]: Started libpod-conmon-7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa.scope.
Nov 29 00:16:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:16:25 np0005539482 podman[132382]: 2025-11-29 05:16:24.941777002 +0000 UTC m=+0.037873408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:16:25 np0005539482 podman[132382]: 2025-11-29 05:16:25.054185255 +0000 UTC m=+0.150281691 container init 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:16:25 np0005539482 podman[132382]: 2025-11-29 05:16:25.066604692 +0000 UTC m=+0.162701028 container start 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:16:25 np0005539482 podman[132382]: 2025-11-29 05:16:25.069221124 +0000 UTC m=+0.165317490 container attach 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:16:25 np0005539482 amazing_elion[132398]: 167 167
Nov 29 00:16:25 np0005539482 systemd[1]: libpod-7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa.scope: Deactivated successfully.
Nov 29 00:16:25 np0005539482 podman[132382]: 2025-11-29 05:16:25.07446823 +0000 UTC m=+0.170564566 container died 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:16:25 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2bbc31f96143717c4f6680d11eaebe786b9ae25459dded49e8e988c678f4c58e-merged.mount: Deactivated successfully.
Nov 29 00:16:25 np0005539482 podman[132382]: 2025-11-29 05:16:25.11410795 +0000 UTC m=+0.210204276 container remove 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:16:25 np0005539482 systemd[1]: libpod-conmon-7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa.scope: Deactivated successfully.
Nov 29 00:16:25 np0005539482 podman[132422]: 2025-11-29 05:16:25.334164939 +0000 UTC m=+0.053732709 container create fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:16:25 np0005539482 systemd[1]: Started libpod-conmon-fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738.scope.
Nov 29 00:16:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:16:25 np0005539482 podman[132422]: 2025-11-29 05:16:25.311462854 +0000 UTC m=+0.031030684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:16:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:25 np0005539482 podman[132422]: 2025-11-29 05:16:25.421682404 +0000 UTC m=+0.141250204 container init fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:16:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:25 np0005539482 podman[132422]: 2025-11-29 05:16:25.435050634 +0000 UTC m=+0.154618414 container start fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:16:25 np0005539482 podman[132422]: 2025-11-29 05:16:25.438513047 +0000 UTC m=+0.158080877 container attach fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]: {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:    "0": [
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:        {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "devices": [
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "/dev/loop3"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            ],
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_name": "ceph_lv0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_size": "21470642176",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "name": "ceph_lv0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "tags": {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cluster_name": "ceph",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.crush_device_class": "",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.encrypted": "0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osd_id": "0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.type": "block",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.vdo": "0"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            },
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "type": "block",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "vg_name": "ceph_vg0"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:        }
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:    ],
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:    "1": [
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:        {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "devices": [
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "/dev/loop4"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            ],
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_name": "ceph_lv1",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_size": "21470642176",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "name": "ceph_lv1",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "tags": {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cluster_name": "ceph",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.crush_device_class": "",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.encrypted": "0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osd_id": "1",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.type": "block",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.vdo": "0"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            },
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "type": "block",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "vg_name": "ceph_vg1"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:        }
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:    ],
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:    "2": [
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:        {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "devices": [
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "/dev/loop5"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            ],
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_name": "ceph_lv2",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_size": "21470642176",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "name": "ceph_lv2",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "tags": {
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.cluster_name": "ceph",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.crush_device_class": "",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.encrypted": "0",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osd_id": "2",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.type": "block",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:                "ceph.vdo": "0"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            },
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "type": "block",
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:            "vg_name": "ceph_vg2"
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:        }
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]:    ]
Nov 29 00:16:26 np0005539482 quirky_brattain[132439]: }
Nov 29 00:16:26 np0005539482 systemd[1]: libpod-fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738.scope: Deactivated successfully.
Nov 29 00:16:26 np0005539482 podman[132422]: 2025-11-29 05:16:26.218589565 +0000 UTC m=+0.938157415 container died fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:16:26 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88-merged.mount: Deactivated successfully.
Nov 29 00:16:26 np0005539482 podman[132422]: 2025-11-29 05:16:26.296145203 +0000 UTC m=+1.015713023 container remove fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:16:26 np0005539482 systemd[1]: libpod-conmon-fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738.scope: Deactivated successfully.
Nov 29 00:16:26 np0005539482 podman[132602]: 2025-11-29 05:16:26.928659118 +0000 UTC m=+0.051910044 container create bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:26 np0005539482 systemd[1]: Started libpod-conmon-bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82.scope.
Nov 29 00:16:26 np0005539482 podman[132602]: 2025-11-29 05:16:26.900571665 +0000 UTC m=+0.023822681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:16:27 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:16:27 np0005539482 podman[132602]: 2025-11-29 05:16:27.020064147 +0000 UTC m=+0.143315083 container init bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:16:27 np0005539482 podman[132602]: 2025-11-29 05:16:27.031703155 +0000 UTC m=+0.154954061 container start bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:16:27 np0005539482 podman[132602]: 2025-11-29 05:16:27.036837009 +0000 UTC m=+0.160087955 container attach bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:16:27 np0005539482 boring_mclaren[132618]: 167 167
Nov 29 00:16:27 np0005539482 systemd[1]: libpod-bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82.scope: Deactivated successfully.
Nov 29 00:16:27 np0005539482 podman[132602]: 2025-11-29 05:16:27.039539023 +0000 UTC m=+0.162789949 container died bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b0d4fff86a6cf6d1e21242825d2083c2c7dc1dcca8e8ebe28838b3765e4db952-merged.mount: Deactivated successfully.
Nov 29 00:16:27 np0005539482 podman[132602]: 2025-11-29 05:16:27.084304925 +0000 UTC m=+0.207555851 container remove bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:16:27 np0005539482 systemd[1]: libpod-conmon-bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82.scope: Deactivated successfully.
Nov 29 00:16:27 np0005539482 podman[132644]: 2025-11-29 05:16:27.318724868 +0000 UTC m=+0.061177766 container create b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:16:27 np0005539482 systemd[1]: Started libpod-conmon-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope.
Nov 29 00:16:27 np0005539482 podman[132644]: 2025-11-29 05:16:27.291242099 +0000 UTC m=+0.033695057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:16:27 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:16:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:27 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:16:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:27 np0005539482 podman[132644]: 2025-11-29 05:16:27.437576034 +0000 UTC m=+0.180028962 container init b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:16:27 np0005539482 podman[132644]: 2025-11-29 05:16:27.448784232 +0000 UTC m=+0.191237160 container start b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:27 np0005539482 podman[132644]: 2025-11-29 05:16:27.453470714 +0000 UTC m=+0.195923642 container attach b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]: {
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "osd_id": 0,
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "type": "bluestore"
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:    },
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "osd_id": 1,
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "type": "bluestore"
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:    },
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "osd_id": 2,
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:        "type": "bluestore"
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]:    }
Nov 29 00:16:28 np0005539482 adoring_torvalds[132661]: }
Nov 29 00:16:28 np0005539482 systemd[1]: libpod-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope: Deactivated successfully.
Nov 29 00:16:28 np0005539482 podman[132644]: 2025-11-29 05:16:28.491197952 +0000 UTC m=+1.233650880 container died b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:16:28 np0005539482 systemd[1]: libpod-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope: Consumed 1.049s CPU time.
Nov 29 00:16:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff-merged.mount: Deactivated successfully.
Nov 29 00:16:28 np0005539482 podman[132644]: 2025-11-29 05:16:28.554521119 +0000 UTC m=+1.296974027 container remove b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:16:28 np0005539482 systemd[1]: libpod-conmon-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope: Deactivated successfully.
Nov 29 00:16:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:16:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:16:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:16:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:16:28 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 1e934f82-d5e1-45a1-868b-903e9770c140 does not exist
Nov 29 00:16:28 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 8f7576f4-edd1-4755-ad40-5c2085928517 does not exist
Nov 29 00:16:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:29 np0005539482 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 00:16:29 np0005539482 systemd[1]: session-17.scope: Consumed 1min 25.967s CPU time.
Nov 29 00:16:29 np0005539482 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Nov 29 00:16:29 np0005539482 systemd-logind[793]: Removed session 17.
Nov 29 00:16:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:16:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:16:30 np0005539482 systemd-logind[793]: New session 41 of user zuul.
Nov 29 00:16:30 np0005539482 systemd[1]: Started Session 41 of User zuul.
Nov 29 00:16:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:31 np0005539482 python3.9[132908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:16:33 np0005539482 python3.9[133064]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 00:16:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:34 np0005539482 python3.9[133218]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:16:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:35 np0005539482 python3.9[133371]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:16:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:36 np0005539482 python3.9[133524]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:16:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:37 np0005539482 python3.9[133676]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:16:37 np0005539482 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Nov 29 00:16:37 np0005539482 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 00:16:37 np0005539482 systemd[1]: session-41.scope: Consumed 4.589s CPU time.
Nov 29 00:16:37 np0005539482 systemd-logind[793]: Removed session 41.
Nov 29 00:16:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:16:41
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', '.rgw.root', 'backups', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:16:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:43 np0005539482 systemd-logind[793]: New session 42 of user zuul.
Nov 29 00:16:43 np0005539482 systemd[1]: Started Session 42 of User zuul.
Nov 29 00:16:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:44 np0005539482 python3.9[133855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:16:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:45 np0005539482 python3.9[134011]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:16:46 np0005539482 python3.9[134095]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 00:16:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:48 np0005539482 python3.9[134246]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:16:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:50 np0005539482 python3.9[134397]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:16:51 np0005539482 python3.9[134547]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:16:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:52 np0005539482 python3.9[134698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:16:52 np0005539482 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 00:16:52 np0005539482 systemd[1]: session-42.scope: Consumed 6.452s CPU time.
Nov 29 00:16:52 np0005539482 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Nov 29 00:16:52 np0005539482 systemd-logind[793]: Removed session 42.
Nov 29 00:16:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:16:57 np0005539482 systemd-logind[793]: New session 43 of user zuul.
Nov 29 00:16:57 np0005539482 systemd[1]: Started Session 43 of User zuul.
Nov 29 00:16:59 np0005539482 python3.9[134877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:16:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:16:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:00 np0005539482 python3.9[135033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:01 np0005539482 python3.9[135185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:02 np0005539482 python3.9[135337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:03 np0005539482 python3.9[135460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393421.8743083-65-75474468439612/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d964e29446a15bf219d1f39a0bcf7adda320f9e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:04 np0005539482 python3.9[135612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:04 np0005539482 python3.9[135735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393423.549431-65-180969507387554/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=077366a36a0310a88727ebecf6959a48ad4186c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:05 np0005539482 python3.9[135887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:06 np0005539482 python3.9[136010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393424.9073458-65-120006318862557/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3c2b03cad198e356b4c3ecd33d00b02843b0c2f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:06 np0005539482 python3.9[136162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:07 np0005539482 python3.9[136314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:08 np0005539482 python3.9[136466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:09 np0005539482 python3.9[136589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393427.972592-124-52884175136741/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0d6d35d117547aaf5ddee29a6d0a529d82aeb93b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:10 np0005539482 python3.9[136741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:10 np0005539482 python3.9[136864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393429.5179088-124-240443419867000/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=eca369a90e5944c1c3ae7c2351662e846dddb3e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:11 np0005539482 python3.9[137016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:17:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:11 np0005539482 python3.9[137139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393430.7847896-124-258844760852483/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0deb4dacf3bc7bb1197ae21aac4c45bcb95c3d1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:12 np0005539482 python3.9[137291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:13 np0005539482 python3.9[137443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:14 np0005539482 python3.9[137595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:15 np0005539482 python3.9[137718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393433.9195518-183-1233429827651/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d6414211c7944ea45bbfc0b627e51d384577f8d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:15 np0005539482 python3.9[137871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:16 np0005539482 python3.9[137994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393435.2714267-183-244045972723734/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=eca369a90e5944c1c3ae7c2351662e846dddb3e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:17 np0005539482 python3.9[138147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:18 np0005539482 python3.9[138270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393436.7076762-183-180650248438107/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=81b2a509a9ed127899e3697de7de1afb4726a4d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:19 np0005539482 python3.9[138422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:20 np0005539482 python3.9[138574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:20 np0005539482 python3.9[138697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393439.690746-251-52716413697474/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:21 np0005539482 python3.9[138849]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:22 np0005539482 python3.9[139001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:23 np0005539482 python3.9[139124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393441.9956808-275-179464019831335/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:24 np0005539482 python3.9[139276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:24 np0005539482 python3.9[139428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:25 np0005539482 python3.9[139551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393444.3369088-299-270532558142390/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:26 np0005539482 python3.9[139703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:26 np0005539482 python3.9[139855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:27 np0005539482 python3.9[139978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393446.4727738-323-8113773757378/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:28 np0005539482 python3.9[140130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:29 np0005539482 python3.9[140380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:17:29 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 097ac8b0-e33e-489d-8d9b-551fb465aa05 does not exist
Nov 29 00:17:29 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a2c247fc-09fa-47c6-8c2b-8072c2bd1db1 does not exist
Nov 29 00:17:29 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ec36492f-78a7-4608-94e7-a405138dfc61 does not exist
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:17:29 np0005539482 python3.9[140585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393448.745382-347-20400533105468/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:17:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.159409441 +0000 UTC m=+0.057386720 container create 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:17:30 np0005539482 systemd[1]: Started libpod-conmon-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope.
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.131543517 +0000 UTC m=+0.029520866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:17:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.247297948 +0000 UTC m=+0.145275227 container init 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.255426337 +0000 UTC m=+0.153403596 container start 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.258707098 +0000 UTC m=+0.156684357 container attach 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:17:30 np0005539482 angry_hawking[140789]: 167 167
Nov 29 00:17:30 np0005539482 systemd[1]: libpod-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope: Deactivated successfully.
Nov 29 00:17:30 np0005539482 conmon[140789]: conmon 14fa43378914b27f08e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope/container/memory.events
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.262924562 +0000 UTC m=+0.160901861 container died 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:17:30 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4e6c8dcafc9e38f7d6579a6f3933d89eff61be7810bd8db79900c460d90ae2bf-merged.mount: Deactivated successfully.
Nov 29 00:17:30 np0005539482 podman[140728]: 2025-11-29 05:17:30.303954318 +0000 UTC m=+0.201931587 container remove 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:17:30 np0005539482 systemd[1]: libpod-conmon-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope: Deactivated successfully.
Nov 29 00:17:30 np0005539482 podman[140865]: 2025-11-29 05:17:30.458322227 +0000 UTC m=+0.042392152 container create c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:17:30 np0005539482 systemd[1]: Started libpod-conmon-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope.
Nov 29 00:17:30 np0005539482 podman[140865]: 2025-11-29 05:17:30.436986613 +0000 UTC m=+0.021056538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:17:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:17:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:30 np0005539482 podman[140865]: 2025-11-29 05:17:30.582652448 +0000 UTC m=+0.166722463 container init c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:17:30 np0005539482 podman[140865]: 2025-11-29 05:17:30.593949255 +0000 UTC m=+0.178019150 container start c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:17:30 np0005539482 python3.9[140859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:30 np0005539482 podman[140865]: 2025-11-29 05:17:30.597758669 +0000 UTC m=+0.181828594 container attach c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:17:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:31 np0005539482 python3.9[141042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:31 np0005539482 affectionate_lehmann[140883]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:17:31 np0005539482 affectionate_lehmann[140883]: --> relative data size: 1.0
Nov 29 00:17:31 np0005539482 affectionate_lehmann[140883]: --> All data devices are unavailable
Nov 29 00:17:31 np0005539482 systemd[1]: libpod-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope: Deactivated successfully.
Nov 29 00:17:31 np0005539482 podman[140865]: 2025-11-29 05:17:31.71135604 +0000 UTC m=+1.295425935 container died c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:17:31 np0005539482 systemd[1]: libpod-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope: Consumed 1.059s CPU time.
Nov 29 00:17:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a-merged.mount: Deactivated successfully.
Nov 29 00:17:31 np0005539482 podman[140865]: 2025-11-29 05:17:31.794055889 +0000 UTC m=+1.378125784 container remove c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:17:31 np0005539482 systemd[1]: libpod-conmon-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope: Deactivated successfully.
Nov 29 00:17:32 np0005539482 python3.9[141231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393450.841335-371-43642366881700/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.541426803 +0000 UTC m=+0.061931591 container create e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:17:32 np0005539482 systemd[1]: Started libpod-conmon-e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e.scope.
Nov 29 00:17:32 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.516317916 +0000 UTC m=+0.036822744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:17:32 np0005539482 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 00:17:32 np0005539482 systemd[1]: session-43.scope: Consumed 25.979s CPU time.
Nov 29 00:17:32 np0005539482 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Nov 29 00:17:32 np0005539482 systemd-logind[793]: Removed session 43.
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.623433446 +0000 UTC m=+0.143938234 container init e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.634389154 +0000 UTC m=+0.154893922 container start e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.63786681 +0000 UTC m=+0.158371578 container attach e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:17:32 np0005539482 intelligent_black[141379]: 167 167
Nov 29 00:17:32 np0005539482 systemd[1]: libpod-e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e.scope: Deactivated successfully.
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.640787441 +0000 UTC m=+0.161292209 container died e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:17:32 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5e498a32eaa1e46640c522d96675f346660d511cab7fd69e0883ca80adcd1731-merged.mount: Deactivated successfully.
Nov 29 00:17:32 np0005539482 podman[141363]: 2025-11-29 05:17:32.676518699 +0000 UTC m=+0.197023467 container remove e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 00:17:32 np0005539482 systemd[1]: libpod-conmon-e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e.scope: Deactivated successfully.
Nov 29 00:17:32 np0005539482 podman[141404]: 2025-11-29 05:17:32.830766214 +0000 UTC m=+0.044914303 container create 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:17:32 np0005539482 systemd[1]: Started libpod-conmon-028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85.scope.
Nov 29 00:17:32 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:17:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:32 np0005539482 podman[141404]: 2025-11-29 05:17:32.905779225 +0000 UTC m=+0.119927354 container init 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:17:32 np0005539482 podman[141404]: 2025-11-29 05:17:32.81101927 +0000 UTC m=+0.025167389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:17:32 np0005539482 podman[141404]: 2025-11-29 05:17:32.914371065 +0000 UTC m=+0.128519164 container start 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:17:32 np0005539482 podman[141404]: 2025-11-29 05:17:32.91781811 +0000 UTC m=+0.131966219 container attach 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:17:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]: {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:    "0": [
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:        {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "devices": [
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "/dev/loop3"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            ],
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_name": "ceph_lv0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_size": "21470642176",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "name": "ceph_lv0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "tags": {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cluster_name": "ceph",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.crush_device_class": "",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.encrypted": "0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osd_id": "0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.type": "block",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.vdo": "0"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            },
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "type": "block",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "vg_name": "ceph_vg0"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:        }
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:    ],
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:    "1": [
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:        {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "devices": [
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "/dev/loop4"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            ],
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_name": "ceph_lv1",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_size": "21470642176",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "name": "ceph_lv1",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "tags": {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cluster_name": "ceph",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.crush_device_class": "",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.encrypted": "0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osd_id": "1",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.type": "block",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.vdo": "0"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            },
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "type": "block",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "vg_name": "ceph_vg1"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:        }
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:    ],
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:    "2": [
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:        {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "devices": [
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "/dev/loop5"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            ],
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_name": "ceph_lv2",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_size": "21470642176",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "name": "ceph_lv2",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "tags": {
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.cluster_name": "ceph",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.crush_device_class": "",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.encrypted": "0",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osd_id": "2",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.type": "block",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:                "ceph.vdo": "0"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            },
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "type": "block",
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:            "vg_name": "ceph_vg2"
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:        }
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]:    ]
Nov 29 00:17:33 np0005539482 angry_blackburn[141420]: }
Nov 29 00:17:33 np0005539482 systemd[1]: libpod-028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85.scope: Deactivated successfully.
Nov 29 00:17:33 np0005539482 podman[141404]: 2025-11-29 05:17:33.622000493 +0000 UTC m=+0.836148612 container died 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:17:33 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316-merged.mount: Deactivated successfully.
Nov 29 00:17:33 np0005539482 podman[141404]: 2025-11-29 05:17:33.691305164 +0000 UTC m=+0.905453283 container remove 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:17:33 np0005539482 systemd[1]: libpod-conmon-028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85.scope: Deactivated successfully.
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.336857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454336927, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 252, "total_data_size": 2421058, "memory_usage": 2459112, "flush_reason": "Manual Compaction"}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454350497, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1412791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7330, "largest_seqno": 9009, "table_properties": {"data_size": 1407233, "index_size": 2506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16028, "raw_average_key_size": 20, "raw_value_size": 1394152, "raw_average_value_size": 1803, "num_data_blocks": 118, "num_entries": 773, "num_filter_entries": 773, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393294, "oldest_key_time": 1764393294, "file_creation_time": 1764393454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13689 microseconds, and 5372 cpu microseconds.
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.350552) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1412791 bytes OK
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.350574) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.352509) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.352531) EVENT_LOG_v1 {"time_micros": 1764393454352523, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.352551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2413582, prev total WAL file size 2413582, number of live WAL files 2.
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.352802519 +0000 UTC m=+0.047800964 container create 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.353856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1379KB)], [20(7305KB)]
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454353918, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8893809, "oldest_snapshot_seqno": -1}
Nov 29 00:17:34 np0005539482 systemd[1]: Started libpod-conmon-4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3.scope.
Nov 29 00:17:34 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3384 keys, 6952343 bytes, temperature: kUnknown
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454420656, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6952343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6926422, "index_size": 16340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 80945, "raw_average_key_size": 23, "raw_value_size": 6861983, "raw_average_value_size": 2027, "num_data_blocks": 725, "num_entries": 3384, "num_filter_entries": 3384, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.327249282 +0000 UTC m=+0.022247797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.420891) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6952343 bytes
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.422170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 104.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3826, records dropped: 442 output_compression: NoCompression
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.422194) EVENT_LOG_v1 {"time_micros": 1764393454422183, "job": 6, "event": "compaction_finished", "compaction_time_micros": 66796, "compaction_time_cpu_micros": 29807, "output_level": 6, "num_output_files": 1, "total_output_size": 6952343, "num_input_records": 3826, "num_output_records": 3384, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454422672, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454424463, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.353750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:17:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.436168705 +0000 UTC m=+0.131167240 container init 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.44366831 +0000 UTC m=+0.138666785 container start 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:17:34 np0005539482 distracted_bose[141600]: 167 167
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.447306708 +0000 UTC m=+0.142305183 container attach 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:17:34 np0005539482 systemd[1]: libpod-4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3.scope: Deactivated successfully.
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.448901437 +0000 UTC m=+0.143899902 container died 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:17:34 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ab125ec6bfd18c1dc451185488961f5c999b0da71214b5f8d1f83afdf416c250-merged.mount: Deactivated successfully.
Nov 29 00:17:34 np0005539482 podman[141584]: 2025-11-29 05:17:34.485531267 +0000 UTC m=+0.180529732 container remove 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:17:34 np0005539482 systemd[1]: libpod-conmon-4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3.scope: Deactivated successfully.
Nov 29 00:17:34 np0005539482 podman[141624]: 2025-11-29 05:17:34.721104949 +0000 UTC m=+0.069070597 container create e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:17:34 np0005539482 systemd[1]: Started libpod-conmon-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope.
Nov 29 00:17:34 np0005539482 podman[141624]: 2025-11-29 05:17:34.691009389 +0000 UTC m=+0.038975127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:17:34 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:17:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:17:34 np0005539482 podman[141624]: 2025-11-29 05:17:34.824519157 +0000 UTC m=+0.172484835 container init e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:17:34 np0005539482 podman[141624]: 2025-11-29 05:17:34.84096874 +0000 UTC m=+0.188934428 container start e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:17:34 np0005539482 podman[141624]: 2025-11-29 05:17:34.845516712 +0000 UTC m=+0.193482350 container attach e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:17:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]: {
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "osd_id": 0,
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "type": "bluestore"
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:    },
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "osd_id": 1,
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "type": "bluestore"
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:    },
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "osd_id": 2,
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:        "type": "bluestore"
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]:    }
Nov 29 00:17:35 np0005539482 friendly_jemison[141640]: }
Nov 29 00:17:35 np0005539482 systemd[1]: libpod-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope: Deactivated successfully.
Nov 29 00:17:35 np0005539482 systemd[1]: libpod-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope: Consumed 1.078s CPU time.
Nov 29 00:17:35 np0005539482 podman[141673]: 2025-11-29 05:17:35.958541509 +0000 UTC m=+0.029076875 container died e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:17:35 np0005539482 systemd[1]: var-lib-containers-storage-overlay-17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2-merged.mount: Deactivated successfully.
Nov 29 00:17:36 np0005539482 podman[141673]: 2025-11-29 05:17:36.013352514 +0000 UTC m=+0.083887860 container remove e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:17:36 np0005539482 systemd[1]: libpod-conmon-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope: Deactivated successfully.
Nov 29 00:17:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:17:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:17:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:17:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:17:36 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0f0c2f49-f302-4084-98ee-f2aa0549138c does not exist
Nov 29 00:17:36 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 57150611-c6a5-4886-b05c-dae9c942acbe does not exist
Nov 29 00:17:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:17:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:17:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:38 np0005539482 systemd-logind[793]: New session 44 of user zuul.
Nov 29 00:17:38 np0005539482 systemd[1]: Started Session 44 of User zuul.
Nov 29 00:17:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:39 np0005539482 python3.9[141893]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:40 np0005539482 python3.9[142045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:17:41
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:17:41 np0005539482 python3.9[142168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393460.0237696-34-203625331488709/.source.conf _original_basename=ceph.conf follow=False checksum=f36dbb4697f374c5e3f0472993712ce777bfe2a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:17:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:41 np0005539482 python3.9[142320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:17:42 np0005539482 python3.9[142443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393461.489297-34-40046112781311/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=1cc9e4eb20e7af3f1c9d65ee54a3a3ef5b88c5e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:17:43 np0005539482 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 00:17:43 np0005539482 systemd[1]: session-44.scope: Consumed 2.700s CPU time.
Nov 29 00:17:43 np0005539482 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Nov 29 00:17:43 np0005539482 systemd-logind[793]: Removed session 44.
Nov 29 00:17:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:48 np0005539482 systemd-logind[793]: New session 45 of user zuul.
Nov 29 00:17:48 np0005539482 systemd[1]: Started Session 45 of User zuul.
Nov 29 00:17:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:49 np0005539482 python3.9[142621]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:17:50 np0005539482 python3.9[142777]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:17:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:51 np0005539482 python3.9[142929]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:17:52 np0005539482 python3.9[143079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:17:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:53 np0005539482 python3.9[143231]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 00:17:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:17:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2033 writes, 9029 keys, 2033 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2033 writes, 2033 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2033 writes, 9029 keys, 2033 commit groups, 1.0 writes per commit group, ingest: 11.43 MB, 0.02 MB/s#012Interval WAL: 2033 writes, 2033 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    110.6      0.08              0.03         3    0.026       0      0       0.0       0.0#012  L6      1/0    6.63 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    128.7    113.2      0.12              0.05         2    0.061    7168    731       0.0       0.0#012 Sum      1/0    6.63 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     78.8    112.1      0.20              0.08         5    0.040    7168    731       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     79.7    113.2      0.20              0.08         4    0.049    7168    731       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    128.7    113.2      0.12              0.05         2    0.061    7168    731       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    113.2      0.07              0.03         2    0.037       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 308.00 MB usage: 554.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(36,467.97 KB,0.148377%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.16 KB,0.0187564%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 00:17:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:56 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 00:17:56 np0005539482 python3.9[143388]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:17:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:57 np0005539482 python3.9[143472]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:17:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:17:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:17:59 np0005539482 python3.9[143627]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:18:00 np0005539482 python3[143782]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 00:18:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:01 np0005539482 python3.9[143934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:02 np0005539482 python3.9[144086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:03 np0005539482 python3.9[144164]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:04 np0005539482 python3.9[144316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:04 np0005539482 python3.9[144394]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.q7xayevk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:05 np0005539482 python3.9[144546]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:06 np0005539482 python3.9[144624]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:06 np0005539482 python3.9[144776]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:08 np0005539482 python3[144929]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 00:18:09 np0005539482 python3.9[145081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:09 np0005539482 python3.9[145206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393488.4726708-157-205592488769979/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:10 np0005539482 python3.9[145358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:11 np0005539482 python3.9[145483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393490.1617022-172-256688585185865/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:18:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:12 np0005539482 python3.9[145635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:12 np0005539482 python3.9[145760]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393491.5893674-187-155179837342542/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:13 np0005539482 python3.9[145912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:14 np0005539482 python3.9[146037]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393493.1124704-202-264766161544286/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:14 np0005539482 python3.9[146189]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:15 np0005539482 python3.9[146314]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393494.3104706-217-75169874709429/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:16 np0005539482 python3.9[146466]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:17 np0005539482 python3.9[146618]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:18 np0005539482 python3.9[146773]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:19 np0005539482 python3.9[146925]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:19 np0005539482 python3.9[147078]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:18:20 np0005539482 python3.9[147232]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:21 np0005539482 python3.9[147387]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:22 np0005539482 python3.9[147537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:18:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:23 np0005539482 python3.9[147692]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:23 np0005539482 ovs-vsctl[147693]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 00:18:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:24 np0005539482 python3.9[147845]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:25 np0005539482 python3.9[148000]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:18:25 np0005539482 ovs-vsctl[148001]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 00:18:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:25 np0005539482 python3.9[148151]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:18:26 np0005539482 python3.9[148305]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:18:27 np0005539482 python3.9[148457]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:27 np0005539482 python3.9[148535]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:18:28 np0005539482 python3.9[148687]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:29 np0005539482 python3.9[148765]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:18:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:29 np0005539482 python3.9[148917]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:30 np0005539482 python3.9[149069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:30 np0005539482 python3.9[149147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:31 np0005539482 python3.9[149299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:32 np0005539482 python3.9[149377]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.709082) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512709193, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 687, "num_deletes": 251, "total_data_size": 864928, "memory_usage": 877680, "flush_reason": "Manual Compaction"}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512719212, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 857440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9010, "largest_seqno": 9696, "table_properties": {"data_size": 853828, "index_size": 1456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7670, "raw_average_key_size": 18, "raw_value_size": 846678, "raw_average_value_size": 2025, "num_data_blocks": 67, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393454, "oldest_key_time": 1764393454, "file_creation_time": 1764393512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10249 microseconds, and 6324 cpu microseconds.
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.719340) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 857440 bytes OK
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.719370) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721132) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721161) EVENT_LOG_v1 {"time_micros": 1764393512721151, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 861356, prev total WAL file size 861356, number of live WAL files 2.
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(837KB)], [23(6789KB)]
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512722017, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7809783, "oldest_snapshot_seqno": -1}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3289 keys, 6072600 bytes, temperature: kUnknown
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512764109, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6072600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6048762, "index_size": 14513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79741, "raw_average_key_size": 24, "raw_value_size": 5987396, "raw_average_value_size": 1820, "num_data_blocks": 633, "num_entries": 3289, "num_filter_entries": 3289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.764562) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6072600 bytes
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.765596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.1 rd, 143.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(16.2) write-amplify(7.1) OK, records in: 3802, records dropped: 513 output_compression: NoCompression
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.765613) EVENT_LOG_v1 {"time_micros": 1764393512765604, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42414, "compaction_time_cpu_micros": 13563, "output_level": 6, "num_output_files": 1, "total_output_size": 6072600, "num_input_records": 3802, "num_output_records": 3289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512765987, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512767323, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:18:32 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:18:33 np0005539482 python3.9[149529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:18:33 np0005539482 systemd[1]: Reloading.
Nov 29 00:18:33 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:18:33 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:18:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:34 np0005539482 python3.9[149719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:34 np0005539482 python3.9[149797]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:35 np0005539482 python3.9[149949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:36 np0005539482 python3.9[150027]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:37 np0005539482 python3.9[150279]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:18:37 np0005539482 systemd[1]: Reloading.
Nov 29 00:18:37 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:18:37 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:18:37 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ac7bbc08-296f-434c-901e-a58713aa0beb does not exist
Nov 29 00:18:37 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev bb3e150f-20d7-44cb-8d3d-14a9f5e696f8 does not exist
Nov 29 00:18:37 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e5047bf7-0b2c-40d9-9ed8-82d0e261e5a1 does not exist
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:18:37 np0005539482 systemd[1]: Starting Create netns directory...
Nov 29 00:18:37 np0005539482 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 00:18:37 np0005539482 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 00:18:37 np0005539482 systemd[1]: Finished Create netns directory.
Nov 29 00:18:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:18:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:18:37 np0005539482 podman[150571]: 2025-11-29 05:18:37.855979693 +0000 UTC m=+0.057591629 container create ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:18:37 np0005539482 systemd[1]: Started libpod-conmon-ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33.scope.
Nov 29 00:18:37 np0005539482 podman[150571]: 2025-11-29 05:18:37.836878271 +0000 UTC m=+0.038490227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:18:37 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:37 np0005539482 podman[150571]: 2025-11-29 05:18:37.953596807 +0000 UTC m=+0.155208783 container init ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:18:37 np0005539482 podman[150571]: 2025-11-29 05:18:37.962341906 +0000 UTC m=+0.163953832 container start ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:18:37 np0005539482 optimistic_bartik[150630]: 167 167
Nov 29 00:18:37 np0005539482 systemd[1]: libpod-ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33.scope: Deactivated successfully.
Nov 29 00:18:37 np0005539482 podman[150571]: 2025-11-29 05:18:37.970311691 +0000 UTC m=+0.171923657 container attach ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:18:37 np0005539482 podman[150571]: 2025-11-29 05:18:37.97089833 +0000 UTC m=+0.172510266 container died ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:18:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6aeda8a49386d030689c75b424084b5f9bbb1e5ba91061b535cb6811b8c2f69b-merged.mount: Deactivated successfully.
Nov 29 00:18:38 np0005539482 podman[150571]: 2025-11-29 05:18:38.013794611 +0000 UTC m=+0.215406537 container remove ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:18:38 np0005539482 systemd[1]: libpod-conmon-ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33.scope: Deactivated successfully.
Nov 29 00:18:38 np0005539482 podman[150686]: 2025-11-29 05:18:38.1492986 +0000 UTC m=+0.041391663 container create 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:18:38 np0005539482 systemd[1]: Started libpod-conmon-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope.
Nov 29 00:18:38 np0005539482 python3.9[150680]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:18:38 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:38 np0005539482 podman[150686]: 2025-11-29 05:18:38.132077039 +0000 UTC m=+0.024170122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:18:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:38 np0005539482 podman[150686]: 2025-11-29 05:18:38.254462623 +0000 UTC m=+0.146555706 container init 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:18:38 np0005539482 podman[150686]: 2025-11-29 05:18:38.273118971 +0000 UTC m=+0.165212034 container start 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:18:38 np0005539482 podman[150686]: 2025-11-29 05:18:38.277078092 +0000 UTC m=+0.169171165 container attach 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:18:39 np0005539482 python3.9[150859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:39 np0005539482 adoring_ptolemy[150703]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:18:39 np0005539482 adoring_ptolemy[150703]: --> relative data size: 1.0
Nov 29 00:18:39 np0005539482 adoring_ptolemy[150703]: --> All data devices are unavailable
Nov 29 00:18:39 np0005539482 systemd[1]: libpod-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope: Deactivated successfully.
Nov 29 00:18:39 np0005539482 systemd[1]: libpod-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope: Consumed 1.084s CPU time.
Nov 29 00:18:39 np0005539482 podman[150686]: 2025-11-29 05:18:39.432543076 +0000 UTC m=+1.324636209 container died 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:18:39 np0005539482 systemd[1]: var-lib-containers-storage-overlay-38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11-merged.mount: Deactivated successfully.
Nov 29 00:18:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:39 np0005539482 podman[150686]: 2025-11-29 05:18:39.511982958 +0000 UTC m=+1.404076031 container remove 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:18:39 np0005539482 systemd[1]: libpod-conmon-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope: Deactivated successfully.
Nov 29 00:18:39 np0005539482 python3.9[151020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393518.4482346-468-115034954856512/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.352703657 +0000 UTC m=+0.074025943 container create 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:18:40 np0005539482 systemd[1]: Started libpod-conmon-68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18.scope.
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.317133248 +0000 UTC m=+0.038455574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:18:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.446845985 +0000 UTC m=+0.168168341 container init 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.454153057 +0000 UTC m=+0.175475303 container start 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.457393804 +0000 UTC m=+0.178716090 container attach 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:18:40 np0005539482 nervous_moser[151276]: 167 167
Nov 29 00:18:40 np0005539482 systemd[1]: libpod-68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18.scope: Deactivated successfully.
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.463157305 +0000 UTC m=+0.184479591 container died 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:18:40 np0005539482 systemd[1]: var-lib-containers-storage-overlay-463ea3f5cc693eafa5649770f8a6ef0eac347396a8cd8ce70f652a725353f8e4-merged.mount: Deactivated successfully.
Nov 29 00:18:40 np0005539482 podman[151212]: 2025-11-29 05:18:40.50614995 +0000 UTC m=+0.227472196 container remove 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:18:40 np0005539482 systemd[1]: libpod-conmon-68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18.scope: Deactivated successfully.
Nov 29 00:18:40 np0005539482 podman[151356]: 2025-11-29 05:18:40.730629905 +0000 UTC m=+0.069261075 container create f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:18:40 np0005539482 systemd[1]: Started libpod-conmon-f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4.scope.
Nov 29 00:18:40 np0005539482 python3.9[151350]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:18:40 np0005539482 podman[151356]: 2025-11-29 05:18:40.702874897 +0000 UTC m=+0.041506157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:18:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:40 np0005539482 podman[151356]: 2025-11-29 05:18:40.851884302 +0000 UTC m=+0.190515552 container init f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:18:40 np0005539482 podman[151356]: 2025-11-29 05:18:40.864687256 +0000 UTC m=+0.203318456 container start f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:18:40 np0005539482 podman[151356]: 2025-11-29 05:18:40.868346228 +0000 UTC m=+0.206977498 container attach f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:18:41
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'volumes']
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:18:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:41 np0005539482 python3.9[151529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]: {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:    "0": [
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:        {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "devices": [
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "/dev/loop3"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            ],
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_name": "ceph_lv0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_size": "21470642176",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "name": "ceph_lv0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "tags": {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cluster_name": "ceph",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.crush_device_class": "",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.encrypted": "0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osd_id": "0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.type": "block",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.vdo": "0"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            },
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "type": "block",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "vg_name": "ceph_vg0"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:        }
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:    ],
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:    "1": [
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:        {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "devices": [
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "/dev/loop4"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            ],
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_name": "ceph_lv1",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_size": "21470642176",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "name": "ceph_lv1",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "tags": {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cluster_name": "ceph",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.crush_device_class": "",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.encrypted": "0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osd_id": "1",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.type": "block",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.vdo": "0"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            },
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "type": "block",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "vg_name": "ceph_vg1"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:        }
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:    ],
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:    "2": [
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:        {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "devices": [
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "/dev/loop5"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            ],
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_name": "ceph_lv2",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_size": "21470642176",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "name": "ceph_lv2",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "tags": {
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.cluster_name": "ceph",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.crush_device_class": "",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.encrypted": "0",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osd_id": "2",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.type": "block",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:                "ceph.vdo": "0"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            },
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "type": "block",
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:            "vg_name": "ceph_vg2"
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:        }
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]:    ]
Nov 29 00:18:41 np0005539482 nervous_gauss[151373]: }
Nov 29 00:18:41 np0005539482 systemd[1]: libpod-f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4.scope: Deactivated successfully.
Nov 29 00:18:41 np0005539482 podman[151356]: 2025-11-29 05:18:41.655685568 +0000 UTC m=+0.994316738 container died f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:18:41 np0005539482 systemd[1]: var-lib-containers-storage-overlay-8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9-merged.mount: Deactivated successfully.
Nov 29 00:18:41 np0005539482 podman[151356]: 2025-11-29 05:18:41.705605092 +0000 UTC m=+1.044236252 container remove f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:18:41 np0005539482 systemd[1]: libpod-conmon-f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4.scope: Deactivated successfully.
Nov 29 00:18:42 np0005539482 python3.9[151763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393521.1273267-493-226782149232541/.source.json _original_basename=.3nqn71o2 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.243782428 +0000 UTC m=+0.042882801 container create 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:18:42 np0005539482 systemd[1]: Started libpod-conmon-7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e.scope.
Nov 29 00:18:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.31537797 +0000 UTC m=+0.114478363 container init 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.224156758 +0000 UTC m=+0.023257121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.32231682 +0000 UTC m=+0.121417163 container start 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.326138526 +0000 UTC m=+0.125238909 container attach 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:18:42 np0005539482 jolly_gagarin[151847]: 167 167
Nov 29 00:18:42 np0005539482 systemd[1]: libpod-7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e.scope: Deactivated successfully.
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.3283568 +0000 UTC m=+0.127457153 container died 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:18:42 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f62424a6029c7d74f5ac2880dfdf518504b46d7902125277d1cbd87c510d9245-merged.mount: Deactivated successfully.
Nov 29 00:18:42 np0005539482 podman[151826]: 2025-11-29 05:18:42.361883331 +0000 UTC m=+0.160983684 container remove 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:18:42 np0005539482 systemd[1]: libpod-conmon-7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e.scope: Deactivated successfully.
Nov 29 00:18:42 np0005539482 podman[151946]: 2025-11-29 05:18:42.546494685 +0000 UTC m=+0.051193916 container create bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:18:42 np0005539482 systemd[1]: Started libpod-conmon-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope.
Nov 29 00:18:42 np0005539482 podman[151946]: 2025-11-29 05:18:42.526636718 +0000 UTC m=+0.031335989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:18:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:42 np0005539482 podman[151946]: 2025-11-29 05:18:42.665171257 +0000 UTC m=+0.169870678 container init bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:18:42 np0005539482 podman[151946]: 2025-11-29 05:18:42.676227503 +0000 UTC m=+0.180926744 container start bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:18:42 np0005539482 podman[151946]: 2025-11-29 05:18:42.681937492 +0000 UTC m=+0.186636703 container attach bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:18:42 np0005539482 python3.9[152019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]: {
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "osd_id": 0,
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "type": "bluestore"
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:    },
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "osd_id": 1,
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "type": "bluestore"
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:    },
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "osd_id": 2,
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:        "type": "bluestore"
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]:    }
Nov 29 00:18:43 np0005539482 sharp_mclaren[151986]: }
Nov 29 00:18:43 np0005539482 systemd[1]: libpod-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope: Deactivated successfully.
Nov 29 00:18:43 np0005539482 podman[151946]: 2025-11-29 05:18:43.720193225 +0000 UTC m=+1.224892456 container died bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:18:43 np0005539482 systemd[1]: libpod-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope: Consumed 1.046s CPU time.
Nov 29 00:18:43 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db-merged.mount: Deactivated successfully.
Nov 29 00:18:43 np0005539482 podman[151946]: 2025-11-29 05:18:43.794469095 +0000 UTC m=+1.299168286 container remove bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:18:43 np0005539482 systemd[1]: libpod-conmon-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope: Deactivated successfully.
Nov 29 00:18:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:18:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:18:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:18:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:18:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev dbce3264-756f-423a-9bc3-c6b60d298ae1 does not exist
Nov 29 00:18:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ba05a742-b0bd-4433-ab18-af51642d7e1e does not exist
Nov 29 00:18:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:18:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:18:45 np0005539482 python3.9[152538]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 00:18:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:46 np0005539482 python3.9[152690]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 00:18:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:47 np0005539482 python3.9[152842]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 00:18:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:49 np0005539482 python3[153020]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:18:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:54 np0005539482 podman[153033]: 2025-11-29 05:18:54.293975651 +0000 UTC m=+4.664847993 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 00:18:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:54 np0005539482 podman[153154]: 2025-11-29 05:18:54.43583164 +0000 UTC m=+0.061132826 container create 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 00:18:54 np0005539482 podman[153154]: 2025-11-29 05:18:54.395578386 +0000 UTC m=+0.020879652 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 00:18:54 np0005539482 python3[153020]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 00:18:55 np0005539482 python3.9[153344]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:18:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:56 np0005539482 python3.9[153500]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:56 np0005539482 python3.9[153576]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:18:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:57 np0005539482 python3.9[153727]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393536.9099998-581-232314051282663/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:18:58 np0005539482 python3.9[153803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:18:58 np0005539482 systemd[1]: Reloading.
Nov 29 00:18:58 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:18:58 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:18:59 np0005539482 python3.9[153913]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:18:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:18:59 np0005539482 systemd[1]: Reloading.
Nov 29 00:18:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:18:59 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:18:59 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:18:59 np0005539482 systemd[1]: Starting ovn_controller container...
Nov 29 00:18:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:18:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d2e56174bec8578d838178f3d2f095f95316cff89a2a6012a0d38c75eb2b65/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 00:18:59 np0005539482 systemd[1]: Started /usr/bin/podman healthcheck run 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53.
Nov 29 00:18:59 np0005539482 podman[153954]: 2025-11-29 05:18:59.896735713 +0000 UTC m=+0.139743002 container init 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:18:59 np0005539482 ovn_controller[153970]: + sudo -E kolla_set_configs
Nov 29 00:18:59 np0005539482 podman[153954]: 2025-11-29 05:18:59.932792194 +0000 UTC m=+0.175799453 container start 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:18:59 np0005539482 edpm-start-podman-container[153954]: ovn_controller
Nov 29 00:18:59 np0005539482 systemd[1]: Created slice User Slice of UID 0.
Nov 29 00:18:59 np0005539482 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 00:18:59 np0005539482 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 00:18:59 np0005539482 systemd[1]: Starting User Manager for UID 0...
Nov 29 00:19:00 np0005539482 edpm-start-podman-container[153953]: Creating additional drop-in dependency for "ovn_controller" (7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53)
Nov 29 00:19:00 np0005539482 podman[153976]: 2025-11-29 05:19:00.035609813 +0000 UTC m=+0.084717283 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:19:00 np0005539482 systemd[1]: 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53-42826d82be438151.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 00:19:00 np0005539482 systemd[1]: 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53-42826d82be438151.service: Failed with result 'exit-code'.
Nov 29 00:19:00 np0005539482 systemd[1]: Reloading.
Nov 29 00:19:00 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:19:00 np0005539482 systemd[153995]: Queued start job for default target Main User Target.
Nov 29 00:19:00 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:19:00 np0005539482 systemd[153995]: Created slice User Application Slice.
Nov 29 00:19:00 np0005539482 systemd[153995]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 00:19:00 np0005539482 systemd[153995]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 00:19:00 np0005539482 systemd[153995]: Reached target Paths.
Nov 29 00:19:00 np0005539482 systemd[153995]: Reached target Timers.
Nov 29 00:19:00 np0005539482 systemd[153995]: Starting D-Bus User Message Bus Socket...
Nov 29 00:19:00 np0005539482 systemd[153995]: Starting Create User's Volatile Files and Directories...
Nov 29 00:19:00 np0005539482 systemd[153995]: Listening on D-Bus User Message Bus Socket.
Nov 29 00:19:00 np0005539482 systemd[153995]: Reached target Sockets.
Nov 29 00:19:00 np0005539482 systemd[153995]: Finished Create User's Volatile Files and Directories.
Nov 29 00:19:00 np0005539482 systemd[153995]: Reached target Basic System.
Nov 29 00:19:00 np0005539482 systemd[153995]: Reached target Main User Target.
Nov 29 00:19:00 np0005539482 systemd[153995]: Startup finished in 151ms.
Nov 29 00:19:00 np0005539482 systemd[1]: Started User Manager for UID 0.
Nov 29 00:19:00 np0005539482 systemd[1]: Started Session c1 of User root.
Nov 29 00:19:00 np0005539482 systemd[1]: Started ovn_controller container.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: INFO:__main__:Validating config file
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: INFO:__main__:Writing out command to execute
Nov 29 00:19:00 np0005539482 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: ++ cat /run_command
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + ARGS=
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + sudo kolla_copy_cacerts
Nov 29 00:19:00 np0005539482 systemd[1]: Started Session c2 of User root.
Nov 29 00:19:00 np0005539482 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + [[ ! -n '' ]]
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + . kolla_extend_start
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + umask 0022
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5151] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5163] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5179] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5187] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5194] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 00:19:00 np0005539482 kernel: br-int: entered promiscuous mode
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 00:19:00 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:00Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5393] manager: (ovn-1193e5-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 00:19:00 np0005539482 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 00:19:00 np0005539482 systemd-udevd[154105]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:19:00 np0005539482 systemd-udevd[154106]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5650] device (genev_sys_6081): carrier: link connected
Nov 29 00:19:00 np0005539482 NetworkManager[49073]: <info>  [1764393540.5656] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 00:19:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:01 np0005539482 python3.9[154237]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:19:01 np0005539482 ovs-vsctl[154238]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 00:19:02 np0005539482 python3.9[154390]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:19:02 np0005539482 ovs-vsctl[154392]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 00:19:03 np0005539482 python3.9[154545]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:19:03 np0005539482 ovs-vsctl[154546]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 00:19:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:03 np0005539482 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 00:19:03 np0005539482 systemd[1]: session-45.scope: Consumed 1min 516ms CPU time.
Nov 29 00:19:03 np0005539482 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Nov 29 00:19:03 np0005539482 systemd-logind[793]: Removed session 45.
Nov 29 00:19:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:09 np0005539482 systemd-logind[793]: New session 47 of user zuul.
Nov 29 00:19:09 np0005539482 systemd[1]: Started Session 47 of User zuul.
Nov 29 00:19:10 np0005539482 systemd[1]: Stopping User Manager for UID 0...
Nov 29 00:19:10 np0005539482 systemd[153995]: Activating special unit Exit the Session...
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped target Main User Target.
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped target Basic System.
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped target Paths.
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped target Sockets.
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped target Timers.
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 00:19:10 np0005539482 systemd[153995]: Closed D-Bus User Message Bus Socket.
Nov 29 00:19:10 np0005539482 systemd[153995]: Stopped Create User's Volatile Files and Directories.
Nov 29 00:19:10 np0005539482 systemd[153995]: Removed slice User Application Slice.
Nov 29 00:19:10 np0005539482 systemd[153995]: Reached target Shutdown.
Nov 29 00:19:10 np0005539482 systemd[153995]: Finished Exit the Session.
Nov 29 00:19:10 np0005539482 systemd[153995]: Reached target Exit the Session.
Nov 29 00:19:10 np0005539482 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 00:19:10 np0005539482 systemd[1]: Stopped User Manager for UID 0.
Nov 29 00:19:10 np0005539482 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 00:19:10 np0005539482 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 00:19:10 np0005539482 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 00:19:10 np0005539482 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 00:19:10 np0005539482 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 00:19:10 np0005539482 python3.9[154726]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:19:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:12 np0005539482 python3.9[154882]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:12 np0005539482 python3.9[155034]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:13 np0005539482 python3.9[155186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:14 np0005539482 python3.9[155338]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:15 np0005539482 python3.9[155490]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:15 np0005539482 python3.9[155640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:19:16 np0005539482 python3.9[155792]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 00:19:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:18 np0005539482 python3.9[155942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:19 np0005539482 python3.9[156064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393557.680408-86-62116215349733/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:19 np0005539482 python3.9[156214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:20 np0005539482 python3.9[156335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393559.3845565-101-76019781011967/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:21 np0005539482 python3.9[156487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:19:22 np0005539482 python3.9[156571]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:19:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:24 np0005539482 python3.9[156724]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:19:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:25 np0005539482 python3.9[156877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:26 np0005539482 python3.9[156998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393565.228068-138-1099252354488/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:26 np0005539482 python3.9[157148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:27 np0005539482 python3.9[157269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393566.3816183-138-171446976670684/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:28 np0005539482 python3.9[157419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:29 np0005539482 python3.9[157540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393568.422645-182-269222877716292/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:30 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:30Z|00025|memory|INFO|17408 kB peak resident set size after 29.9 seconds
Nov 29 00:19:30 np0005539482 ovn_controller[153970]: 2025-11-29T05:19:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 00:19:30 np0005539482 podman[157664]: 2025-11-29 05:19:30.383119757 +0000 UTC m=+0.106061175 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 00:19:30 np0005539482 python3.9[157699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:31 np0005539482 python3.9[157838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393569.789758-182-189574762258839/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:31 np0005539482 python3.9[157988]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:19:32 np0005539482 python3.9[158142]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:33 np0005539482 python3.9[158294]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:33 np0005539482 python3.9[158372]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:34 np0005539482 python3.9[158524]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:35 np0005539482 python3.9[158602]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:35 np0005539482 python3.9[158754]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:36 np0005539482 python3.9[158906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:37 np0005539482 python3.9[158984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:37 np0005539482 python3.9[159136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:19:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5568 writes, 24K keys, 5568 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5568 writes, 870 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5568 writes, 24K keys, 5568 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s#012Interval WAL: 5568 writes, 870 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 00:19:38 np0005539482 python3.9[159214]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:39 np0005539482 python3.9[159366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:19:39 np0005539482 systemd[1]: Reloading.
Nov 29 00:19:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:39 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:19:39 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:19:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:40 np0005539482 python3.9[159554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:40 np0005539482 python3.9[159632]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:19:41
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:19:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:41 np0005539482 python3.9[159784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:42 np0005539482 python3.9[159862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:19:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6875 writes, 28K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6875 writes, 1210 syncs, 5.68 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6875 writes, 28K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 19.64 MB, 0.03 MB/s#012Interval WAL: 6875 writes, 1210 syncs, 5.68 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Nov 29 00:19:43 np0005539482 python3.9[160014]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:19:43 np0005539482 systemd[1]: Reloading.
Nov 29 00:19:43 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:19:43 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:19:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:43 np0005539482 systemd[1]: Starting Create netns directory...
Nov 29 00:19:43 np0005539482 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 00:19:43 np0005539482 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 00:19:43 np0005539482 systemd[1]: Finished Create netns directory.
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:44 np0005539482 python3.9[160307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:45 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d9f466a6-ab2c-4a2b-95c5-82afea8a2723 does not exist
Nov 29 00:19:45 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6f945ae9-4ed7-4f7c-8ea3-1b559a334e33 does not exist
Nov 29 00:19:45 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 84eeb6d8-a70f-4323-89bb-5f47baa46c0f does not exist
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:19:45 np0005539482 python3.9[160598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:19:45 np0005539482 podman[160876]: 2025-11-29 05:19:45.858079726 +0000 UTC m=+0.044302391 container create 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:19:45 np0005539482 python3.9[160840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393584.7334507-333-189093488402961/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:45 np0005539482 systemd[1]: Started libpod-conmon-9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514.scope.
Nov 29 00:19:45 np0005539482 podman[160876]: 2025-11-29 05:19:45.835417217 +0000 UTC m=+0.021639902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:19:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:19:45 np0005539482 podman[160876]: 2025-11-29 05:19:45.950707105 +0000 UTC m=+0.136929740 container init 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:19:45 np0005539482 podman[160876]: 2025-11-29 05:19:45.965163927 +0000 UTC m=+0.151386562 container start 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:19:45 np0005539482 podman[160876]: 2025-11-29 05:19:45.968546816 +0000 UTC m=+0.154769471 container attach 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:19:45 np0005539482 upbeat_elion[160893]: 167 167
Nov 29 00:19:45 np0005539482 systemd[1]: libpod-9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514.scope: Deactivated successfully.
Nov 29 00:19:45 np0005539482 podman[160876]: 2025-11-29 05:19:45.971589247 +0000 UTC m=+0.157811892 container died 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:19:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b1d92a3fe710eefb39590a09c32e1588e26ae5584f8a083693ad9be77e411c4c-merged.mount: Deactivated successfully.
Nov 29 00:19:46 np0005539482 podman[160876]: 2025-11-29 05:19:46.008959545 +0000 UTC m=+0.195182190 container remove 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:19:46 np0005539482 systemd[1]: libpod-conmon-9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514.scope: Deactivated successfully.
Nov 29 00:19:46 np0005539482 podman[160940]: 2025-11-29 05:19:46.192938348 +0000 UTC m=+0.055385675 container create 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:19:46 np0005539482 systemd[1]: Started libpod-conmon-6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638.scope.
Nov 29 00:19:46 np0005539482 podman[160940]: 2025-11-29 05:19:46.165139394 +0000 UTC m=+0.027586821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:19:46 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:19:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:46 np0005539482 podman[160940]: 2025-11-29 05:19:46.303705117 +0000 UTC m=+0.166152494 container init 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:19:46 np0005539482 podman[160940]: 2025-11-29 05:19:46.325160333 +0000 UTC m=+0.187607670 container start 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:19:46 np0005539482 podman[160940]: 2025-11-29 05:19:46.330500085 +0000 UTC m=+0.192947422 container attach 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:19:46 np0005539482 python3.9[161090]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:19:47 np0005539482 beautiful_brown[160957]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:19:47 np0005539482 beautiful_brown[160957]: --> relative data size: 1.0
Nov 29 00:19:47 np0005539482 beautiful_brown[160957]: --> All data devices are unavailable
Nov 29 00:19:47 np0005539482 systemd[1]: libpod-6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638.scope: Deactivated successfully.
Nov 29 00:19:47 np0005539482 podman[160940]: 2025-11-29 05:19:47.34303491 +0000 UTC m=+1.205482237 container died 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:19:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043-merged.mount: Deactivated successfully.
Nov 29 00:19:47 np0005539482 podman[160940]: 2025-11-29 05:19:47.392046895 +0000 UTC m=+1.254494222 container remove 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 00:19:47 np0005539482 systemd[1]: libpod-conmon-6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638.scope: Deactivated successfully.
Nov 29 00:19:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:47 np0005539482 python3.9[161333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:19:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:19:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 18.29 MB, 0.03 MB/s#012Interval WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 00:19:47 np0005539482 podman[161420]: 2025-11-29 05:19:47.972950661 +0000 UTC m=+0.053175747 container create 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:19:48 np0005539482 systemd[1]: Started libpod-conmon-2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0.scope.
Nov 29 00:19:48 np0005539482 podman[161420]: 2025-11-29 05:19:47.954824831 +0000 UTC m=+0.035049887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:19:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:19:48 np0005539482 podman[161420]: 2025-11-29 05:19:48.076522589 +0000 UTC m=+0.156747735 container init 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:19:48 np0005539482 podman[161420]: 2025-11-29 05:19:48.083642777 +0000 UTC m=+0.163867853 container start 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:19:48 np0005539482 podman[161420]: 2025-11-29 05:19:48.08832172 +0000 UTC m=+0.168546816 container attach 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:19:48 np0005539482 agitated_villani[161483]: 167 167
Nov 29 00:19:48 np0005539482 systemd[1]: libpod-2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0.scope: Deactivated successfully.
Nov 29 00:19:48 np0005539482 podman[161420]: 2025-11-29 05:19:48.092528071 +0000 UTC m=+0.172753157 container died 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:19:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c9903d32b0f0d125944bec1060df53019019714d02cdc8bef79a513a9989e4e1-merged.mount: Deactivated successfully.
Nov 29 00:19:48 np0005539482 podman[161420]: 2025-11-29 05:19:48.138496186 +0000 UTC m=+0.218721272 container remove 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:19:48 np0005539482 systemd[1]: libpod-conmon-2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0.scope: Deactivated successfully.
Nov 29 00:19:48 np0005539482 podman[161581]: 2025-11-29 05:19:48.336136611 +0000 UTC m=+0.052356995 container create 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:19:48 np0005539482 podman[161581]: 2025-11-29 05:19:48.314303364 +0000 UTC m=+0.030523778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:19:48 np0005539482 systemd[1]: Started libpod-conmon-7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a.scope.
Nov 29 00:19:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:19:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:48 np0005539482 podman[161581]: 2025-11-29 05:19:48.463916598 +0000 UTC m=+0.180137002 container init 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:19:48 np0005539482 podman[161581]: 2025-11-29 05:19:48.470664317 +0000 UTC m=+0.186884711 container start 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:19:48 np0005539482 podman[161581]: 2025-11-29 05:19:48.473887003 +0000 UTC m=+0.190107397 container attach 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:19:48 np0005539482 python3.9[161583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393587.234265-358-175979518254990/.source.json _original_basename=.swln47tk follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:49 np0005539482 python3.9[161755]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:19:49 np0005539482 cool_hawking[161599]: {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:    "0": [
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:        {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "devices": [
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "/dev/loop3"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            ],
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_name": "ceph_lv0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_size": "21470642176",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "name": "ceph_lv0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "tags": {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cluster_name": "ceph",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.crush_device_class": "",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.encrypted": "0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osd_id": "0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.type": "block",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.vdo": "0"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            },
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "type": "block",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "vg_name": "ceph_vg0"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:        }
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:    ],
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:    "1": [
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:        {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "devices": [
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "/dev/loop4"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            ],
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_name": "ceph_lv1",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_size": "21470642176",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "name": "ceph_lv1",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "tags": {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cluster_name": "ceph",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.crush_device_class": "",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.encrypted": "0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osd_id": "1",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.type": "block",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.vdo": "0"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            },
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "type": "block",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "vg_name": "ceph_vg1"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:        }
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:    ],
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:    "2": [
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:        {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "devices": [
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "/dev/loop5"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            ],
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_name": "ceph_lv2",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_size": "21470642176",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "name": "ceph_lv2",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "tags": {
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.cluster_name": "ceph",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.crush_device_class": "",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.encrypted": "0",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osd_id": "2",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.type": "block",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:                "ceph.vdo": "0"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            },
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "type": "block",
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:            "vg_name": "ceph_vg2"
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:        }
Nov 29 00:19:49 np0005539482 cool_hawking[161599]:    ]
Nov 29 00:19:49 np0005539482 cool_hawking[161599]: }
Nov 29 00:19:49 np0005539482 systemd[1]: libpod-7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a.scope: Deactivated successfully.
Nov 29 00:19:49 np0005539482 podman[161581]: 2025-11-29 05:19:49.257089035 +0000 UTC m=+0.973309469 container died 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:19:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay-16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13-merged.mount: Deactivated successfully.
Nov 29 00:19:49 np0005539482 podman[161581]: 2025-11-29 05:19:49.332788726 +0000 UTC m=+1.049009120 container remove 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:19:49 np0005539482 systemd[1]: libpod-conmon-7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a.scope: Deactivated successfully.
Nov 29 00:19:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:50 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.17693753 +0000 UTC m=+0.057132701 container create 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:19:50 np0005539482 systemd[1]: Started libpod-conmon-3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619.scope.
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.158183654 +0000 UTC m=+0.038378855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:19:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.2836284 +0000 UTC m=+0.163823661 container init 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.290360168 +0000 UTC m=+0.170555359 container start 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.29458387 +0000 UTC m=+0.174779061 container attach 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:19:50 np0005539482 vigorous_khayyam[162153]: 167 167
Nov 29 00:19:50 np0005539482 systemd[1]: libpod-3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619.scope: Deactivated successfully.
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.296076889 +0000 UTC m=+0.176272080 container died 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:19:50 np0005539482 systemd[1]: var-lib-containers-storage-overlay-593f2b7871581a1b790f186ab0e1a6684b1fc8653ff154a9c03f085c1f2d01ac-merged.mount: Deactivated successfully.
Nov 29 00:19:50 np0005539482 podman[162092]: 2025-11-29 05:19:50.341628873 +0000 UTC m=+0.221824064 container remove 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:19:50 np0005539482 systemd[1]: libpod-conmon-3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619.scope: Deactivated successfully.
Nov 29 00:19:50 np0005539482 podman[162233]: 2025-11-29 05:19:50.531924073 +0000 UTC m=+0.058972210 container create 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:19:50 np0005539482 systemd[1]: Started libpod-conmon-989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2.scope.
Nov 29 00:19:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:19:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:19:50 np0005539482 podman[162233]: 2025-11-29 05:19:50.509957843 +0000 UTC m=+0.037005980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:19:50 np0005539482 podman[162233]: 2025-11-29 05:19:50.604727458 +0000 UTC m=+0.131775615 container init 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:19:50 np0005539482 podman[162233]: 2025-11-29 05:19:50.620380192 +0000 UTC m=+0.147428299 container start 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:19:50 np0005539482 podman[162233]: 2025-11-29 05:19:50.623931905 +0000 UTC m=+0.150980022 container attach 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:51 np0005539482 objective_tharp[162250]: {
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "osd_id": 0,
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "type": "bluestore"
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:    },
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "osd_id": 1,
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "type": "bluestore"
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:    },
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "osd_id": 2,
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:        "type": "bluestore"
Nov 29 00:19:51 np0005539482 objective_tharp[162250]:    }
Nov 29 00:19:51 np0005539482 objective_tharp[162250]: }
Nov 29 00:19:51 np0005539482 systemd[1]: libpod-989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2.scope: Deactivated successfully.
Nov 29 00:19:51 np0005539482 podman[162233]: 2025-11-29 05:19:51.59672998 +0000 UTC m=+1.123778077 container died 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:19:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c-merged.mount: Deactivated successfully.
Nov 29 00:19:51 np0005539482 podman[162233]: 2025-11-29 05:19:51.661494082 +0000 UTC m=+1.188542189 container remove 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 00:19:51 np0005539482 python3.9[162423]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 00:19:51 np0005539482 systemd[1]: libpod-conmon-989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2.scope: Deactivated successfully.
Nov 29 00:19:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:19:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:19:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 3881b643-1e8b-442f-9b3f-ce629860a544 does not exist
Nov 29 00:19:51 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 4ae5a2d9-3876-4d7c-9f2a-564aac206b00 does not exist
Nov 29 00:19:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:19:52 np0005539482 python3.9[162650]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 00:19:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:54 np0005539482 python3.9[162802]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 00:19:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:55 np0005539482 python3[162981]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 00:19:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:19:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:19:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:02 np0005539482 podman[163060]: 2025-11-29 05:20:02.325404508 +0000 UTC m=+1.380331828 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 00:20:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:05 np0005539482 podman[162993]: 2025-11-29 05:20:05.397692211 +0000 UTC m=+9.727755362 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 00:20:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:05 np0005539482 podman[163155]: 2025-11-29 05:20:05.539647333 +0000 UTC m=+0.025352681 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 00:20:05 np0005539482 podman[163155]: 2025-11-29 05:20:05.685068678 +0000 UTC m=+0.170774016 container create 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 00:20:05 np0005539482 python3[162981]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 00:20:06 np0005539482 python3.9[163345]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:20:07 np0005539482 python3.9[163499]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:07 np0005539482 python3.9[163575]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:20:08 np0005539482 python3.9[163726]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393607.8428926-446-135762990385855/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:09 np0005539482 python3.9[163802]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:20:09 np0005539482 systemd[1]: Reloading.
Nov 29 00:20:09 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:20:09 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:20:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:10 np0005539482 python3.9[163912]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:11 np0005539482 systemd[1]: Reloading.
Nov 29 00:20:11 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:20:11 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:20:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:11 np0005539482 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 00:20:11 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398c4609a306f444849b2deffb49598961a5888b15151fc3ead216c4ea0f6244/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398c4609a306f444849b2deffb49598961a5888b15151fc3ead216c4ea0f6244/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:11 np0005539482 systemd[1]: Started /usr/bin/podman healthcheck run 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209.
Nov 29 00:20:11 np0005539482 podman[163953]: 2025-11-29 05:20:11.758841429 +0000 UTC m=+0.146842152 container init 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + sudo -E kolla_set_configs
Nov 29 00:20:11 np0005539482 podman[163953]: 2025-11-29 05:20:11.809376405 +0000 UTC m=+0.197377098 container start 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:20:11 np0005539482 edpm-start-podman-container[163953]: ovn_metadata_agent
Nov 29 00:20:11 np0005539482 podman[163975]: 2025-11-29 05:20:11.879422393 +0000 UTC m=+0.058664927 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Validating config file
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 00:20:11 np0005539482 edpm-start-podman-container[163952]: Creating additional drop-in dependency for "ovn_metadata_agent" (5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209)
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Copying service configuration files
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Writing out command to execute
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: ++ cat /run_command
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + CMD=neutron-ovn-metadata-agent
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + ARGS=
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + sudo kolla_copy_cacerts
Nov 29 00:20:11 np0005539482 systemd[1]: Reloading.
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + [[ ! -n '' ]]
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + . kolla_extend_start
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + umask 0022
Nov 29 00:20:11 np0005539482 ovn_metadata_agent[163968]: + exec neutron-ovn-metadata-agent
Nov 29 00:20:11 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:20:12 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:20:12 np0005539482 systemd[1]: Started ovn_metadata_agent container.
Nov 29 00:20:12 np0005539482 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 00:20:12 np0005539482 systemd[1]: session-47.scope: Consumed 56.494s CPU time.
Nov 29 00:20:12 np0005539482 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Nov 29 00:20:12 np0005539482 systemd-logind[793]: Removed session 47.
Nov 29 00:20:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.692 163973 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.692 163973 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.739 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.739 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.740 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.740 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.740 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.754 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 63cfe9d2-e938-418d-9401-5d1a600b4ede (UUID: 63cfe9d2-e938-418d-9401-5d1a600b4ede) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.781 163973 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.782 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.782 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.782 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.784 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.791 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.796 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '63cfe9d2-e938-418d-9401-5d1a600b4ede'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f08f06f3e80>], external_ids={}, name=63cfe9d2-e938-418d-9401-5d1a600b4ede, nb_cfg_timestamp=1764393548543, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.797 163973 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f08f06f6b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.798 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.798 163973 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.799 163973 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.799 163973 INFO oslo_service.service [-] Starting 1 workers
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.803 163973 DEBUG oslo_service.service [-] Started child 164082 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.806 163973 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmps4tl4zy4/privsep.sock']
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.808 164082 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1022618'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.844 164082 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.845 164082 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.845 164082 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.850 164082 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.860 164082 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 00:20:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.869 164082 INFO eventlet.wsgi.server [-] (164082) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 00:20:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:14 np0005539482 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.496 163973 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.497 163973 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps4tl4zy4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.369 164087 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.377 164087 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 00:20:14 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.386 164087 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.387 164087 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164087
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.502 164087 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf6b835-d36d-44df-baa0-f1c0329f554c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.989 164087 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.989 164087 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:20:14 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.989 164087 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:20:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.540 164087 DEBUG oslo.privsep.daemon [-] privsep: reply[ef714392-a3f0-43a3-b011-b3db1bee63d6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.542 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, column=external_ids, values=({'neutron:ovn-metadata-id': '44af6163-09e8-5582-b53f-e0fe312da172'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.551 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.558 163973 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.562 163973 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.562 163973 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.562 163973 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:20:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 00:20:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:18 np0005539482 systemd-logind[793]: New session 48 of user zuul.
Nov 29 00:20:18 np0005539482 systemd[1]: Started Session 48 of User zuul.
Nov 29 00:20:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:19 np0005539482 python3.9[164246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:20:20 np0005539482 python3.9[164402]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:22 np0005539482 python3.9[164567]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:20:22 np0005539482 systemd[1]: Reloading.
Nov 29 00:20:22 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:20:22 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:20:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:23 np0005539482 python3.9[164751]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:20:23 np0005539482 network[164768]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:20:23 np0005539482 network[164769]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:20:23 np0005539482 network[164770]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:20:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:29 np0005539482 python3.9[165033]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:30 np0005539482 python3.9[165186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:31 np0005539482 python3.9[165339]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:32 np0005539482 python3.9[165492]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:33 np0005539482 python3.9[165645]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:33 np0005539482 podman[165647]: 2025-11-29 05:20:33.347924327 +0000 UTC m=+0.123261830 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 00:20:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:34 np0005539482 python3.9[165824]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:34 np0005539482 python3.9[165977]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:20:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:35 np0005539482 python3.9[166130]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:36 np0005539482 python3.9[166282]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:37 np0005539482 python3.9[166434]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:38 np0005539482 python3.9[166586]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:38 np0005539482 python3.9[166738]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:39 np0005539482 python3.9[166890]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:40 np0005539482 python3.9[167042]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:41 np0005539482 python3.9[167194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:20:41
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:20:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:41 np0005539482 python3.9[167346]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:42 np0005539482 podman[167377]: 2025-11-29 05:20:42.021078259 +0000 UTC m=+0.077964826 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 00:20:42 np0005539482 python3.9[167519]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:43 np0005539482 python3.9[167671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:43 np0005539482 python3.9[167823]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:44 np0005539482 python3.9[167975]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:45 np0005539482 python3.9[168127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:20:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:20:46 np0005539482 python3.9[168279]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:47 np0005539482 python3.9[168431]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 00:20:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:20:47 np0005539482 python3.9[168583]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:20:47 np0005539482 systemd[1]: Reloading.
Nov 29 00:20:48 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:20:48 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:20:49 np0005539482 python3.9[168771]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:20:49 np0005539482 python3.9[168924]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:50 np0005539482 python3.9[169077]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:20:51 np0005539482 python3.9[169230]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:20:52 np0005539482 python3.9[169425]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:52 np0005539482 podman[169657]: 2025-11-29 05:20:52.647445765 +0000 UTC m=+0.091990714 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:20:52 np0005539482 podman[169657]: 2025-11-29 05:20:52.733907281 +0000 UTC m=+0.178452200 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:20:52 np0005539482 python3.9[169730]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:20:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:20:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:20:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:20:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:20:53 np0005539482 python3.9[170020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:20:54 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a291ce35-193f-40db-8abb-0a4d82f3531b does not exist
Nov 29 00:20:54 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7d5867d4-556a-4d75-b623-16a4a860d68a does not exist
Nov 29 00:20:54 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 39726ccc-5ae8-42ed-aa0c-f2117e4c58b3 does not exist
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:20:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:20:54 np0005539482 python3.9[170375]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 00:20:54 np0005539482 podman[170491]: 2025-11-29 05:20:54.896569391 +0000 UTC m=+0.053497388 container create 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:20:54 np0005539482 systemd[1]: Started libpod-conmon-36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161.scope.
Nov 29 00:20:54 np0005539482 podman[170491]: 2025-11-29 05:20:54.866078566 +0000 UTC m=+0.023006613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:20:54 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:54 np0005539482 podman[170491]: 2025-11-29 05:20:54.995024285 +0000 UTC m=+0.151952332 container init 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:20:55 np0005539482 podman[170491]: 2025-11-29 05:20:55.004245234 +0000 UTC m=+0.161173201 container start 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 00:20:55 np0005539482 podman[170491]: 2025-11-29 05:20:55.007606808 +0000 UTC m=+0.164534795 container attach 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:20:55 np0005539482 serene_wing[170537]: 167 167
Nov 29 00:20:55 np0005539482 systemd[1]: libpod-36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161.scope: Deactivated successfully.
Nov 29 00:20:55 np0005539482 podman[170491]: 2025-11-29 05:20:55.012012827 +0000 UTC m=+0.168940804 container died 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:20:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-123a45dd65a22c0d1e3c3cff0d31b7baab68c6abfc05a89a1e9c337841d0c7f2-merged.mount: Deactivated successfully.
Nov 29 00:20:55 np0005539482 podman[170491]: 2025-11-29 05:20:55.052686466 +0000 UTC m=+0.209614443 container remove 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:20:55 np0005539482 systemd[1]: libpod-conmon-36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161.scope: Deactivated successfully.
Nov 29 00:20:55 np0005539482 podman[170587]: 2025-11-29 05:20:55.288547689 +0000 UTC m=+0.068112301 container create eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:20:55 np0005539482 systemd[1]: Started libpod-conmon-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope.
Nov 29 00:20:55 np0005539482 podman[170587]: 2025-11-29 05:20:55.258509694 +0000 UTC m=+0.038074346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:20:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:55 np0005539482 podman[170587]: 2025-11-29 05:20:55.429816875 +0000 UTC m=+0.209381537 container init eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:20:55 np0005539482 podman[170587]: 2025-11-29 05:20:55.439114246 +0000 UTC m=+0.218678838 container start eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:20:55 np0005539482 podman[170587]: 2025-11-29 05:20:55.442379537 +0000 UTC m=+0.221944189 container attach eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:20:55 np0005539482 python3.9[170656]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 00:20:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:20:56 np0005539482 agitated_nash[170654]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:20:56 np0005539482 agitated_nash[170654]: --> relative data size: 1.0
Nov 29 00:20:56 np0005539482 agitated_nash[170654]: --> All data devices are unavailable
Nov 29 00:20:56 np0005539482 systemd[1]: libpod-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope: Deactivated successfully.
Nov 29 00:20:56 np0005539482 podman[170587]: 2025-11-29 05:20:56.585329832 +0000 UTC m=+1.364894424 container died eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:20:56 np0005539482 systemd[1]: libpod-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope: Consumed 1.074s CPU time.
Nov 29 00:20:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8-merged.mount: Deactivated successfully.
Nov 29 00:20:56 np0005539482 podman[170587]: 2025-11-29 05:20:56.645990917 +0000 UTC m=+1.425555499 container remove eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:20:56 np0005539482 systemd[1]: libpod-conmon-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope: Deactivated successfully.
Nov 29 00:20:56 np0005539482 python3.9[170840]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.198776426 +0000 UTC m=+0.060239976 container create ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:20:57 np0005539482 systemd[1]: Started libpod-conmon-ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f.scope.
Nov 29 00:20:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.176517133 +0000 UTC m=+0.037980693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.289904307 +0000 UTC m=+0.151367897 container init ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.296238534 +0000 UTC m=+0.157702074 container start ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.300344506 +0000 UTC m=+0.161808106 container attach ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:20:57 np0005539482 silly_tesla[171079]: 167 167
Nov 29 00:20:57 np0005539482 systemd[1]: libpod-ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f.scope: Deactivated successfully.
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.303898175 +0000 UTC m=+0.165361725 container died ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 00:20:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-75f33c0b59e64aed51382710a53996757748cc5eb0ba149d40051e729129cc26-merged.mount: Deactivated successfully.
Nov 29 00:20:57 np0005539482 podman[171026]: 2025-11-29 05:20:57.351994958 +0000 UTC m=+0.213458468 container remove ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:20:57 np0005539482 systemd[1]: libpod-conmon-ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f.scope: Deactivated successfully.
Nov 29 00:20:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:57 np0005539482 podman[171166]: 2025-11-29 05:20:57.541272375 +0000 UTC m=+0.039765397 container create f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:20:57 np0005539482 systemd[1]: Started libpod-conmon-f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e.scope.
Nov 29 00:20:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:57 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:57 np0005539482 podman[171166]: 2025-11-29 05:20:57.524661373 +0000 UTC m=+0.023154415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:20:57 np0005539482 podman[171166]: 2025-11-29 05:20:57.627367982 +0000 UTC m=+0.125861004 container init f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:20:57 np0005539482 podman[171166]: 2025-11-29 05:20:57.636516169 +0000 UTC m=+0.135009181 container start f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:20:57 np0005539482 podman[171166]: 2025-11-29 05:20:57.639589046 +0000 UTC m=+0.138082068 container attach f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:20:57 np0005539482 python3.9[171211]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]: {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:    "0": [
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:        {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "devices": [
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "/dev/loop3"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            ],
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_name": "ceph_lv0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_size": "21470642176",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "name": "ceph_lv0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "tags": {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cluster_name": "ceph",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.crush_device_class": "",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.encrypted": "0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osd_id": "0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.type": "block",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.vdo": "0"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            },
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "type": "block",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "vg_name": "ceph_vg0"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:        }
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:    ],
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:    "1": [
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:        {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "devices": [
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "/dev/loop4"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            ],
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_name": "ceph_lv1",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_size": "21470642176",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "name": "ceph_lv1",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "tags": {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cluster_name": "ceph",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.crush_device_class": "",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.encrypted": "0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osd_id": "1",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.type": "block",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.vdo": "0"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            },
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "type": "block",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "vg_name": "ceph_vg1"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:        }
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:    ],
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:    "2": [
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:        {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "devices": [
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "/dev/loop5"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            ],
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_name": "ceph_lv2",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_size": "21470642176",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "name": "ceph_lv2",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "tags": {
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.cluster_name": "ceph",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.crush_device_class": "",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.encrypted": "0",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osd_id": "2",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.type": "block",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:                "ceph.vdo": "0"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            },
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "type": "block",
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:            "vg_name": "ceph_vg2"
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:        }
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]:    ]
Nov 29 00:20:58 np0005539482 eloquent_lalande[171209]: }
Nov 29 00:20:58 np0005539482 systemd[1]: libpod-f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e.scope: Deactivated successfully.
Nov 29 00:20:58 np0005539482 podman[171227]: 2025-11-29 05:20:58.461279777 +0000 UTC m=+0.026595141 container died f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:20:58 np0005539482 systemd[1]: var-lib-containers-storage-overlay-657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d-merged.mount: Deactivated successfully.
Nov 29 00:20:58 np0005539482 podman[171227]: 2025-11-29 05:20:58.510728345 +0000 UTC m=+0.076043729 container remove f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:20:58 np0005539482 systemd[1]: libpod-conmon-f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e.scope: Deactivated successfully.
Nov 29 00:20:58 np0005539482 python3.9[171367]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.227769349 +0000 UTC m=+0.057177130 container create 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:20:59 np0005539482 systemd[1]: Started libpod-conmon-6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f.scope.
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.197644371 +0000 UTC m=+0.027052242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:20:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.320625723 +0000 UTC m=+0.150033534 container init 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.331504904 +0000 UTC m=+0.160912725 container start 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.33501965 +0000 UTC m=+0.164427441 container attach 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:20:59 np0005539482 cranky_meitner[171478]: 167 167
Nov 29 00:20:59 np0005539482 systemd[1]: libpod-6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f.scope: Deactivated successfully.
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.338836685 +0000 UTC m=+0.168244476 container died 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:20:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:20:59 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b6c5b35713fb8047ddb79fd5d3f93f0a4b04beb8c60d479c572064a8db425fe8-merged.mount: Deactivated successfully.
Nov 29 00:20:59 np0005539482 podman[171461]: 2025-11-29 05:20:59.391918323 +0000 UTC m=+0.221326104 container remove 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:20:59 np0005539482 systemd[1]: libpod-conmon-6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f.scope: Deactivated successfully.
Nov 29 00:20:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:20:59 np0005539482 podman[171501]: 2025-11-29 05:20:59.591340532 +0000 UTC m=+0.041885260 container create acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:20:59 np0005539482 systemd[1]: Started libpod-conmon-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope.
Nov 29 00:20:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:20:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:20:59 np0005539482 podman[171501]: 2025-11-29 05:20:59.576537775 +0000 UTC m=+0.027082523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:20:59 np0005539482 podman[171501]: 2025-11-29 05:20:59.689172779 +0000 UTC m=+0.139717557 container init acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:20:59 np0005539482 podman[171501]: 2025-11-29 05:20:59.697923877 +0000 UTC m=+0.148468645 container start acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:20:59 np0005539482 podman[171501]: 2025-11-29 05:20:59.703888765 +0000 UTC m=+0.154433523 container attach acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]: {
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "osd_id": 0,
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "type": "bluestore"
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:    },
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "osd_id": 1,
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "type": "bluestore"
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:    },
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "osd_id": 2,
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:        "type": "bluestore"
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]:    }
Nov 29 00:21:00 np0005539482 flamboyant_lehmann[171517]: }
Nov 29 00:21:00 np0005539482 systemd[1]: libpod-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope: Deactivated successfully.
Nov 29 00:21:00 np0005539482 systemd[1]: libpod-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope: Consumed 1.117s CPU time.
Nov 29 00:21:00 np0005539482 podman[171554]: 2025-11-29 05:21:00.874187119 +0000 UTC m=+0.044647560 container died acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:21:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d-merged.mount: Deactivated successfully.
Nov 29 00:21:00 np0005539482 podman[171554]: 2025-11-29 05:21:00.940128335 +0000 UTC m=+0.110588746 container remove acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:21:00 np0005539482 systemd[1]: libpod-conmon-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope: Deactivated successfully.
Nov 29 00:21:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:21:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:21:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:21:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:21:01 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f03ab662-4ea3-4469-9f2b-02bd4e4ca606 does not exist
Nov 29 00:21:01 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e52773d9-7216-48f3-b64e-6e995ef363af does not exist
Nov 29 00:21:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:21:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:21:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:04 np0005539482 podman[171631]: 2025-11-29 05:21:04.088693585 +0000 UTC m=+0.143821412 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 00:21:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:21:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:13 np0005539482 podman[171830]: 2025-11-29 05:21:13.019742225 +0000 UTC m=+0.071562607 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 00:21:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:21:13.730 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:21:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:21:13.731 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:21:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:21:13.731 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:21:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:25 np0005539482 kernel: SELinux:  Converting 2768 SID table entries...
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:21:25 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:21:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:34 np0005539482 kernel: SELinux:  Converting 2768 SID table entries...
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:21:34 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:21:34 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 00:21:35 np0005539482 podman[171875]: 2025-11-29 05:21:35.049038586 +0000 UTC m=+0.092089554 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:21:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:21:41
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta']
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:21:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:44 np0005539482 podman[171901]: 2025-11-29 05:21:44.064538348 +0000 UTC m=+0.106849485 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 00:21:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:21:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:21:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:21:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:22:02 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev aeed7bd5-9677-44ac-98e7-a18dff541527 does not exist
Nov 29 00:22:02 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 1f48276e-5066-4a59-a491-501557c76f00 does not exist
Nov 29 00:22:02 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a5e4cbbe-c04b-4d04-b54f-5aea4935df6c does not exist
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:22:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:22:02 np0005539482 podman[180303]: 2025-11-29 05:22:02.793801459 +0000 UTC m=+0.053265510 container create 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:22:02 np0005539482 systemd[1]: Started libpod-conmon-4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89.scope.
Nov 29 00:22:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:22:02 np0005539482 podman[180303]: 2025-11-29 05:22:02.767855858 +0000 UTC m=+0.027319989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:22:02 np0005539482 podman[180303]: 2025-11-29 05:22:02.879121994 +0000 UTC m=+0.138586065 container init 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:02 np0005539482 podman[180303]: 2025-11-29 05:22:02.885518004 +0000 UTC m=+0.144982055 container start 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:22:02 np0005539482 podman[180303]: 2025-11-29 05:22:02.888806152 +0000 UTC m=+0.148270213 container attach 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:22:02 np0005539482 musing_darwin[180384]: 167 167
Nov 29 00:22:02 np0005539482 systemd[1]: libpod-4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89.scope: Deactivated successfully.
Nov 29 00:22:02 np0005539482 podman[180433]: 2025-11-29 05:22:02.928148936 +0000 UTC m=+0.023603794 container died 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-931551e1b7ba0a0bb519beebb07ccffc7473b381d9f0d7b5ae16988c3506077c-merged.mount: Deactivated successfully.
Nov 29 00:22:02 np0005539482 podman[180433]: 2025-11-29 05:22:02.967708155 +0000 UTC m=+0.063163013 container remove 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:22:02 np0005539482 systemd[1]: libpod-conmon-4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89.scope: Deactivated successfully.
Nov 29 00:22:03 np0005539482 podman[180566]: 2025-11-29 05:22:03.143701303 +0000 UTC m=+0.047501892 container create d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:22:03 np0005539482 systemd[1]: Started libpod-conmon-d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a.scope.
Nov 29 00:22:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:22:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:03 np0005539482 podman[180566]: 2025-11-29 05:22:03.213586622 +0000 UTC m=+0.117387281 container init d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:22:03 np0005539482 podman[180566]: 2025-11-29 05:22:03.120421947 +0000 UTC m=+0.024222566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:22:03 np0005539482 podman[180566]: 2025-11-29 05:22:03.228548017 +0000 UTC m=+0.132348636 container start d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:22:03 np0005539482 podman[180566]: 2025-11-29 05:22:03.232624501 +0000 UTC m=+0.136425120 container attach d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:22:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:04 np0005539482 nostalgic_beaver[180636]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:22:04 np0005539482 nostalgic_beaver[180636]: --> relative data size: 1.0
Nov 29 00:22:04 np0005539482 nostalgic_beaver[180636]: --> All data devices are unavailable
Nov 29 00:22:04 np0005539482 podman[180566]: 2025-11-29 05:22:04.281126288 +0000 UTC m=+1.184926887 container died d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:22:04 np0005539482 systemd[1]: libpod-d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a.scope: Deactivated successfully.
Nov 29 00:22:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691-merged.mount: Deactivated successfully.
Nov 29 00:22:04 np0005539482 podman[180566]: 2025-11-29 05:22:04.334188253 +0000 UTC m=+1.237988832 container remove d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:22:04 np0005539482 systemd[1]: libpod-conmon-d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a.scope: Deactivated successfully.
Nov 29 00:22:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:05.002140159 +0000 UTC m=+0.091441620 container create 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:04.932300231 +0000 UTC m=+0.021601672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:22:05 np0005539482 systemd[1]: Started libpod-conmon-990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1.scope.
Nov 29 00:22:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:05.087736759 +0000 UTC m=+0.177038200 container init 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:05.098948148 +0000 UTC m=+0.188249579 container start 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:05.102321857 +0000 UTC m=+0.191623278 container attach 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:22:05 np0005539482 dazzling_mendeleev[181917]: 167 167
Nov 29 00:22:05 np0005539482 systemd[1]: libpod-990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1.scope: Deactivated successfully.
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:05.108584075 +0000 UTC m=+0.197885506 container died 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:22:05 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f3bd2428c14f1ab2c9a7ff89acd36600c1014faec3c6d66ac8f7e15047bb78f6-merged.mount: Deactivated successfully.
Nov 29 00:22:05 np0005539482 podman[181812]: 2025-11-29 05:22:05.148522902 +0000 UTC m=+0.237824323 container remove 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:22:05 np0005539482 systemd[1]: libpod-conmon-990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1.scope: Deactivated successfully.
Nov 29 00:22:05 np0005539482 podman[181939]: 2025-11-29 05:22:05.222154377 +0000 UTC m=+0.124308113 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:22:05 np0005539482 podman[182091]: 2025-11-29 05:22:05.320848385 +0000 UTC m=+0.038774454 container create 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:22:05 np0005539482 systemd[1]: Started libpod-conmon-21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819.scope.
Nov 29 00:22:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:22:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:05 np0005539482 podman[182091]: 2025-11-29 05:22:05.303773145 +0000 UTC m=+0.021699234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:22:05 np0005539482 podman[182091]: 2025-11-29 05:22:05.408823763 +0000 UTC m=+0.126749852 container init 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:22:05 np0005539482 podman[182091]: 2025-11-29 05:22:05.417569803 +0000 UTC m=+0.135495872 container start 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:05 np0005539482 podman[182091]: 2025-11-29 05:22:05.42090294 +0000 UTC m=+0.138829009 container attach 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:22:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]: {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:    "0": [
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:        {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "devices": [
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "/dev/loop3"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            ],
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_name": "ceph_lv0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_size": "21470642176",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "name": "ceph_lv0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "tags": {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cluster_name": "ceph",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.crush_device_class": "",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.encrypted": "0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osd_id": "0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.type": "block",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.vdo": "0"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            },
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "type": "block",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "vg_name": "ceph_vg0"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:        }
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:    ],
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:    "1": [
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:        {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "devices": [
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "/dev/loop4"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            ],
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_name": "ceph_lv1",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_size": "21470642176",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "name": "ceph_lv1",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "tags": {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cluster_name": "ceph",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.crush_device_class": "",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.encrypted": "0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osd_id": "1",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.type": "block",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.vdo": "0"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            },
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "type": "block",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "vg_name": "ceph_vg1"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:        }
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:    ],
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:    "2": [
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:        {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "devices": [
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "/dev/loop5"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            ],
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_name": "ceph_lv2",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_size": "21470642176",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "name": "ceph_lv2",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "tags": {
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.cluster_name": "ceph",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.crush_device_class": "",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.encrypted": "0",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osd_id": "2",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.type": "block",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:                "ceph.vdo": "0"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            },
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "type": "block",
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:            "vg_name": "ceph_vg2"
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:        }
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]:    ]
Nov 29 00:22:06 np0005539482 priceless_ardinghelli[182174]: }
Nov 29 00:22:06 np0005539482 systemd[1]: libpod-21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819.scope: Deactivated successfully.
Nov 29 00:22:06 np0005539482 podman[182091]: 2025-11-29 05:22:06.166250288 +0000 UTC m=+0.884176367 container died 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 00:22:06 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f-merged.mount: Deactivated successfully.
Nov 29 00:22:06 np0005539482 podman[182091]: 2025-11-29 05:22:06.239611519 +0000 UTC m=+0.957537588 container remove 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:22:06 np0005539482 systemd[1]: libpod-conmon-21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819.scope: Deactivated successfully.
Nov 29 00:22:06 np0005539482 podman[183177]: 2025-11-29 05:22:06.935350503 +0000 UTC m=+0.042160352 container create f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:22:06 np0005539482 systemd[1]: Started libpod-conmon-f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0.scope.
Nov 29 00:22:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:22:07 np0005539482 podman[183177]: 2025-11-29 05:22:06.914397465 +0000 UTC m=+0.021207364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:22:07 np0005539482 podman[183177]: 2025-11-29 05:22:07.02568435 +0000 UTC m=+0.132494299 container init f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:22:07 np0005539482 podman[183177]: 2025-11-29 05:22:07.031913238 +0000 UTC m=+0.138723107 container start f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:22:07 np0005539482 jolly_merkle[183247]: 167 167
Nov 29 00:22:07 np0005539482 systemd[1]: libpod-f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0.scope: Deactivated successfully.
Nov 29 00:22:07 np0005539482 podman[183177]: 2025-11-29 05:22:07.036592863 +0000 UTC m=+0.143402762 container attach f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:07 np0005539482 podman[183177]: 2025-11-29 05:22:07.036975481 +0000 UTC m=+0.143785420 container died f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:22:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2e6b75451c2a85fb471a1dc7138a518059634e44f9653ffd7b3d8530958e2f63-merged.mount: Deactivated successfully.
Nov 29 00:22:07 np0005539482 podman[183177]: 2025-11-29 05:22:07.085283519 +0000 UTC m=+0.192093378 container remove f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:07 np0005539482 systemd[1]: libpod-conmon-f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0.scope: Deactivated successfully.
Nov 29 00:22:07 np0005539482 podman[183366]: 2025-11-29 05:22:07.275418386 +0000 UTC m=+0.049329230 container create 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:22:07 np0005539482 systemd[1]: Started libpod-conmon-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope.
Nov 29 00:22:07 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:22:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:07 np0005539482 podman[183366]: 2025-11-29 05:22:07.255996889 +0000 UTC m=+0.029907773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:22:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:07 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:22:07 np0005539482 podman[183366]: 2025-11-29 05:22:07.363153969 +0000 UTC m=+0.137064893 container init 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:22:07 np0005539482 podman[183366]: 2025-11-29 05:22:07.369132562 +0000 UTC m=+0.143043406 container start 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:22:07 np0005539482 podman[183366]: 2025-11-29 05:22:07.373383489 +0000 UTC m=+0.147294353 container attach 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:22:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]: {
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "osd_id": 0,
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "type": "bluestore"
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:    },
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "osd_id": 1,
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "type": "bluestore"
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:    },
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "osd_id": 2,
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:        "type": "bluestore"
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]:    }
Nov 29 00:22:08 np0005539482 goofy_banzai[183448]: }
Nov 29 00:22:08 np0005539482 systemd[1]: libpod-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope: Deactivated successfully.
Nov 29 00:22:08 np0005539482 podman[183366]: 2025-11-29 05:22:08.423914497 +0000 UTC m=+1.197825361 container died 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:22:08 np0005539482 systemd[1]: libpod-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope: Consumed 1.058s CPU time.
Nov 29 00:22:08 np0005539482 systemd[1]: var-lib-containers-storage-overlay-eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13-merged.mount: Deactivated successfully.
Nov 29 00:22:08 np0005539482 podman[183366]: 2025-11-29 05:22:08.499767108 +0000 UTC m=+1.273677972 container remove 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:22:08 np0005539482 systemd[1]: libpod-conmon-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope: Deactivated successfully.
Nov 29 00:22:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:22:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:22:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:22:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:22:08 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 166ee078-1135-4ebd-8e9f-0ff7677fed31 does not exist
Nov 29 00:22:08 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0bb85cdb-3b09-40f1-af24-4c120a096a45 does not exist
Nov 29 00:22:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:22:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:22:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:22:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:22:13.731 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:22:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:22:13.732 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:22:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:22:13.732 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:22:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:14 np0005539482 podman[187181]: 2025-11-29 05:22:14.989745556 +0000 UTC m=+0.046276527 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:22:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.674509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743674549, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3532468, "memory_usage": 3594216, "flush_reason": "Manual Compaction"}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743703731, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3446643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9697, "largest_seqno": 11740, "table_properties": {"data_size": 3437348, "index_size": 5917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17868, "raw_average_key_size": 19, "raw_value_size": 3418938, "raw_average_value_size": 3724, "num_data_blocks": 269, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393513, "oldest_key_time": 1764393513, "file_creation_time": 1764393743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 29286 microseconds, and 11618 cpu microseconds.
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.703793) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3446643 bytes OK
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.703815) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.705464) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.705487) EVENT_LOG_v1 {"time_micros": 1764393743705479, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.705509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3523933, prev total WAL file size 3523933, number of live WAL files 2.
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.707185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3365KB)], [26(5930KB)]
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743707295, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9519243, "oldest_snapshot_seqno": -1}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3693 keys, 7908377 bytes, temperature: kUnknown
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743760899, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7908377, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7879989, "index_size": 18038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88684, "raw_average_key_size": 24, "raw_value_size": 7809623, "raw_average_value_size": 2114, "num_data_blocks": 782, "num_entries": 3693, "num_filter_entries": 3693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.761188) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7908377 bytes
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.763326) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.3 rd, 147.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4207, records dropped: 514 output_compression: NoCompression
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.763360) EVENT_LOG_v1 {"time_micros": 1764393743763344, "job": 10, "event": "compaction_finished", "compaction_time_micros": 53688, "compaction_time_cpu_micros": 23687, "output_level": 6, "num_output_files": 1, "total_output_size": 7908377, "num_input_records": 4207, "num_output_records": 3693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743764640, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743766545, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.707062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:22:23 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:22:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:32 np0005539482 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:22:32 np0005539482 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:22:33 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:22:33 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 00:22:33 np0005539482 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 00:22:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:36 np0005539482 podman[189722]: 2025-11-29 05:22:36.13923098 +0000 UTC m=+0.159435015 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:22:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:40 np0005539482 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 00:22:40 np0005539482 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 00:22:40 np0005539482 systemd[1]: sshd.service: Unit process 180981 (sshd-session) remains running after unit stopped.
Nov 29 00:22:40 np0005539482 systemd[1]: sshd.service: Unit process 180989 (sshd-session) remains running after unit stopped.
Nov 29 00:22:40 np0005539482 systemd[1]: sshd.service: Unit process 189659 (sshd-session) remains running after unit stopped.
Nov 29 00:22:40 np0005539482 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 00:22:40 np0005539482 systemd[1]: sshd.service: Consumed 5.035s CPU time, 40.2M memory peak, read 564.0K from disk, written 152.0K to disk.
Nov 29 00:22:40 np0005539482 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 00:22:40 np0005539482 systemd[1]: Stopping sshd-keygen.target...
Nov 29 00:22:40 np0005539482 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 00:22:40 np0005539482 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 00:22:40 np0005539482 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 00:22:40 np0005539482 systemd[1]: Reached target sshd-keygen.target.
Nov 29 00:22:40 np0005539482 systemd[1]: Starting OpenSSH server daemon...
Nov 29 00:22:40 np0005539482 systemd[1]: Started OpenSSH server daemon.
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:22:41
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:22:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:42 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:22:42 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:22:43 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:43 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:43 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:43 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:22:43 np0005539482 auditd[700]: Audit daemon rotating log files
Nov 29 00:22:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:46 np0005539482 podman[193625]: 2025-11-29 05:22:46.010977122 +0000 UTC m=+0.061130822 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:22:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:47 np0005539482 python3.9[195165]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:22:47 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:47 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:47 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:48 np0005539482 python3.9[196443]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:22:48 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:49 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:49 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:50 np0005539482 python3.9[197551]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:22:50 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:50 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:50 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:22:51 np0005539482 python3.9[198762]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:22:51 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:51 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:51 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:52 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:22:52 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:22:52 np0005539482 systemd[1]: man-db-cache-update.service: Consumed 12.394s CPU time.
Nov 29 00:22:52 np0005539482 systemd[1]: run-r35a3a9f07b1c4a2bbea754b2120e0f87.service: Deactivated successfully.
Nov 29 00:22:52 np0005539482 python3.9[200095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:22:52 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:53 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:53 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:54 np0005539482 python3.9[200314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:22:54 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:54 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:54 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:22:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:55 np0005539482 python3.9[200504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:22:55 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:55 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:55 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:56 np0005539482 python3.9[200694]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:22:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:57 np0005539482 python3.9[200849]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:22:58 np0005539482 systemd[1]: Reloading.
Nov 29 00:22:59 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:22:59 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:22:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:22:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:00 np0005539482 python3.9[201039]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 00:23:00 np0005539482 systemd[1]: Reloading.
Nov 29 00:23:00 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:23:00 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:23:00 np0005539482 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 00:23:00 np0005539482 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 00:23:01 np0005539482 python3.9[201233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:02 np0005539482 python3.9[201388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:03 np0005539482 python3.9[201543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:04 np0005539482 python3.9[201698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:05 np0005539482 python3.9[201853]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:06 np0005539482 python3.9[202008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:06 np0005539482 podman[202011]: 2025-11-29 05:23:06.324647886 +0000 UTC m=+0.123036181 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 00:23:06 np0005539482 python3.9[202190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:07 np0005539482 python3.9[202345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:08 np0005539482 python3.9[202500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:23:09 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 262e0792-65b7-4508-bc8e-b7a5a41629cf does not exist
Nov 29 00:23:09 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e9850d3c-6c07-462a-918a-babc66a098dc does not exist
Nov 29 00:23:09 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 72039eec-0efd-43f2-9d3f-dd5d27a4abf8 does not exist
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:23:09 np0005539482 python3.9[202767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:23:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.032353112 +0000 UTC m=+0.049045355 container create 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 00:23:10 np0005539482 systemd[1]: Started libpod-conmon-2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52.scope.
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.01393602 +0000 UTC m=+0.030628293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:23:10 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.130776269 +0000 UTC m=+0.147468522 container init 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.140875357 +0000 UTC m=+0.157567600 container start 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.144054925 +0000 UTC m=+0.160747188 container attach 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:23:10 np0005539482 funny_visvesvaraya[203094]: 167 167
Nov 29 00:23:10 np0005539482 systemd[1]: libpod-2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52.scope: Deactivated successfully.
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.152356079 +0000 UTC m=+0.169048352 container died 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:23:10 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9e03f77141c3adb1e7a858aaf99c80bd108061cebc669fc9a227e8efa44c47dc-merged.mount: Deactivated successfully.
Nov 29 00:23:10 np0005539482 podman[203057]: 2025-11-29 05:23:10.198305897 +0000 UTC m=+0.214998180 container remove 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:23:10 np0005539482 systemd[1]: libpod-conmon-2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52.scope: Deactivated successfully.
Nov 29 00:23:10 np0005539482 python3.9[203091]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:10 np0005539482 podman[203117]: 2025-11-29 05:23:10.429732999 +0000 UTC m=+0.055304588 container create 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:23:10 np0005539482 systemd[1]: Started libpod-conmon-581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b.scope.
Nov 29 00:23:10 np0005539482 podman[203117]: 2025-11-29 05:23:10.410659311 +0000 UTC m=+0.036230930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:23:10 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:23:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:10 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:10 np0005539482 podman[203117]: 2025-11-29 05:23:10.538148302 +0000 UTC m=+0.163719911 container init 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:23:10 np0005539482 podman[203117]: 2025-11-29 05:23:10.550973836 +0000 UTC m=+0.176545425 container start 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:23:10 np0005539482 podman[203117]: 2025-11-29 05:23:10.554352239 +0000 UTC m=+0.179923818 container attach 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:23:11 np0005539482 python3.9[203292]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:23:11 np0005539482 crazy_blackburn[203136]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:23:11 np0005539482 crazy_blackburn[203136]: --> relative data size: 1.0
Nov 29 00:23:11 np0005539482 crazy_blackburn[203136]: --> All data devices are unavailable
Nov 29 00:23:11 np0005539482 systemd[1]: libpod-581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b.scope: Deactivated successfully.
Nov 29 00:23:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:11 np0005539482 podman[203117]: 2025-11-29 05:23:11.593621207 +0000 UTC m=+1.219192796 container died 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:23:11 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381-merged.mount: Deactivated successfully.
Nov 29 00:23:11 np0005539482 podman[203117]: 2025-11-29 05:23:11.652483192 +0000 UTC m=+1.278054771 container remove 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:23:11 np0005539482 systemd[1]: libpod-conmon-581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b.scope: Deactivated successfully.
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.270404384 +0000 UTC m=+0.045156240 container create 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:23:12 np0005539482 python3.9[203582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:12 np0005539482 systemd[1]: Started libpod-conmon-6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3.scope.
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.246284072 +0000 UTC m=+0.021035918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:23:12 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.364679968 +0000 UTC m=+0.139431814 container init 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.372794328 +0000 UTC m=+0.147546154 container start 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.376023967 +0000 UTC m=+0.150775833 container attach 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:23:12 np0005539482 serene_merkle[203638]: 167 167
Nov 29 00:23:12 np0005539482 systemd[1]: libpod-6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3.scope: Deactivated successfully.
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.378663352 +0000 UTC m=+0.153415208 container died 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:23:12 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0f3d6359353b38356c40e9e077fa9d905c9e550b2d3d7050b13a74cbee5b59a9-merged.mount: Deactivated successfully.
Nov 29 00:23:12 np0005539482 podman[203620]: 2025-11-29 05:23:12.416790958 +0000 UTC m=+0.191542784 container remove 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:23:12 np0005539482 systemd[1]: libpod-conmon-6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3.scope: Deactivated successfully.
Nov 29 00:23:12 np0005539482 podman[203694]: 2025-11-29 05:23:12.588224417 +0000 UTC m=+0.054463878 container create 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:23:12 np0005539482 systemd[1]: Started libpod-conmon-6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a.scope.
Nov 29 00:23:12 np0005539482 podman[203694]: 2025-11-29 05:23:12.568535614 +0000 UTC m=+0.034775115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:23:12 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:23:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:12 np0005539482 podman[203694]: 2025-11-29 05:23:12.695127782 +0000 UTC m=+0.161367243 container init 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:23:12 np0005539482 podman[203694]: 2025-11-29 05:23:12.708697405 +0000 UTC m=+0.174936866 container start 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:23:12 np0005539482 podman[203694]: 2025-11-29 05:23:12.712058627 +0000 UTC m=+0.178298108 container attach 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:23:13 np0005539482 python3.9[203836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 00:23:13 np0005539482 pensive_carver[203757]: {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:    "0": [
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:        {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "devices": [
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "/dev/loop3"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            ],
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_name": "ceph_lv0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_size": "21470642176",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "name": "ceph_lv0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "tags": {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cluster_name": "ceph",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.crush_device_class": "",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.encrypted": "0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osd_id": "0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.type": "block",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.vdo": "0"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            },
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "type": "block",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "vg_name": "ceph_vg0"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:        }
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:    ],
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:    "1": [
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:        {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "devices": [
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "/dev/loop4"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            ],
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_name": "ceph_lv1",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_size": "21470642176",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "name": "ceph_lv1",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "tags": {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cluster_name": "ceph",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.crush_device_class": "",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.encrypted": "0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osd_id": "1",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.type": "block",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.vdo": "0"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            },
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "type": "block",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "vg_name": "ceph_vg1"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:        }
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:    ],
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:    "2": [
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:        {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "devices": [
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "/dev/loop5"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            ],
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_name": "ceph_lv2",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_size": "21470642176",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "name": "ceph_lv2",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "tags": {
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.cluster_name": "ceph",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.crush_device_class": "",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.encrypted": "0",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osd_id": "2",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.type": "block",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:                "ceph.vdo": "0"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            },
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "type": "block",
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:            "vg_name": "ceph_vg2"
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:        }
Nov 29 00:23:13 np0005539482 pensive_carver[203757]:    ]
Nov 29 00:23:13 np0005539482 pensive_carver[203757]: }
Nov 29 00:23:13 np0005539482 systemd[1]: libpod-6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a.scope: Deactivated successfully.
Nov 29 00:23:13 np0005539482 podman[203694]: 2025-11-29 05:23:13.440889613 +0000 UTC m=+0.907129064 container died 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:23:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912-merged.mount: Deactivated successfully.
Nov 29 00:23:13 np0005539482 podman[203694]: 2025-11-29 05:23:13.50674285 +0000 UTC m=+0.972982311 container remove 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:23:13 np0005539482 systemd[1]: libpod-conmon-6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a.scope: Deactivated successfully.
Nov 29 00:23:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:23:13.732 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:23:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:23:13.734 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:23:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:23:13.734 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:23:14 np0005539482 python3.9[204109]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.156870812 +0000 UTC m=+0.052853008 container create 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:23:14 np0005539482 systemd[1]: Started libpod-conmon-80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c.scope.
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.128014454 +0000 UTC m=+0.023996680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:23:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.276587592 +0000 UTC m=+0.172569828 container init 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.292072932 +0000 UTC m=+0.188055118 container start 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:23:14 np0005539482 nostalgic_pare[204211]: 167 167
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.295943797 +0000 UTC m=+0.191926023 container attach 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:23:14 np0005539482 systemd[1]: libpod-80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c.scope: Deactivated successfully.
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.296319956 +0000 UTC m=+0.192302152 container died 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:23:14 np0005539482 systemd[1]: var-lib-containers-storage-overlay-32f5466122754c2f9e74b041d916913fe70b8891ea140c22d479967afa68968b-merged.mount: Deactivated successfully.
Nov 29 00:23:14 np0005539482 podman[204159]: 2025-11-29 05:23:14.338563243 +0000 UTC m=+0.234545429 container remove 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:23:14 np0005539482 systemd[1]: libpod-conmon-80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c.scope: Deactivated successfully.
Nov 29 00:23:14 np0005539482 podman[204311]: 2025-11-29 05:23:14.574048715 +0000 UTC m=+0.059254125 container create d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:23:14 np0005539482 systemd[1]: Started libpod-conmon-d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791.scope.
Nov 29 00:23:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:23:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:23:14 np0005539482 podman[204311]: 2025-11-29 05:23:14.549525574 +0000 UTC m=+0.034730974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:23:14 np0005539482 podman[204311]: 2025-11-29 05:23:14.663593794 +0000 UTC m=+0.148799184 container init d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:23:14 np0005539482 podman[204311]: 2025-11-29 05:23:14.669609651 +0000 UTC m=+0.154815031 container start d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:23:14 np0005539482 podman[204311]: 2025-11-29 05:23:14.672120883 +0000 UTC m=+0.157326263 container attach d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:23:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:14 np0005539482 python3.9[204353]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:23:15 np0005539482 python3.9[204514]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]: {
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "osd_id": 0,
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "type": "bluestore"
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:    },
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "osd_id": 1,
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "type": "bluestore"
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:    },
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "osd_id": 2,
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:        "type": "bluestore"
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]:    }
Nov 29 00:23:15 np0005539482 upbeat_driscoll[204357]: }
Nov 29 00:23:15 np0005539482 systemd[1]: libpod-d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791.scope: Deactivated successfully.
Nov 29 00:23:15 np0005539482 podman[204311]: 2025-11-29 05:23:15.588027861 +0000 UTC m=+1.073233241 container died d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:23:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:15 np0005539482 systemd[1]: var-lib-containers-storage-overlay-0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b-merged.mount: Deactivated successfully.
Nov 29 00:23:15 np0005539482 podman[204311]: 2025-11-29 05:23:15.641553446 +0000 UTC m=+1.126758826 container remove d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:23:15 np0005539482 systemd[1]: libpod-conmon-d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791.scope: Deactivated successfully.
Nov 29 00:23:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:23:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:23:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:23:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:23:15 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 68f1e717-6d27-47be-9785-5b8db4d6ba61 does not exist
Nov 29 00:23:15 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 9fa41ef3-8a02-47c8-b2ed-b30225615518 does not exist
Nov 29 00:23:16 np0005539482 python3.9[204754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:23:16 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:23:16 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:23:16 np0005539482 podman[204878]: 2025-11-29 05:23:16.98167775 +0000 UTC m=+0.083460700 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 00:23:17 np0005539482 python3.9[204922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:23:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:17 np0005539482 python3.9[205074]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:23:18 np0005539482 python3.9[205226]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:19 np0005539482 python3.9[205351]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393798.163104-554-151506864458786/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:20 np0005539482 python3.9[205503]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:21 np0005539482 python3.9[205628]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393799.8034317-554-151642909029897/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:21 np0005539482 python3.9[205780]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:22 np0005539482 python3.9[205905]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393801.2577085-554-39836754554528/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:23 np0005539482 python3.9[206057]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:24 np0005539482 python3.9[206182]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393802.8274128-554-60614321737235/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:25 np0005539482 python3.9[206334]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:25 np0005539482 python3.9[206459]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393804.2763977-554-9164039879860/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:26 np0005539482 python3.9[206611]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:26 np0005539482 python3.9[206736]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393805.8464243-554-237896874963711/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:27 np0005539482 python3.9[206888]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:28 np0005539482 python3.9[207011]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393807.1231892-554-172629975962439/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:29 np0005539482 python3.9[207163]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:29 np0005539482 python3.9[207288]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393808.4609098-554-165545473784338/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:30 np0005539482 python3.9[207440]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 00:23:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:31 np0005539482 python3.9[207593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:32 np0005539482 python3.9[207745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:33 np0005539482 python3.9[207898]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:33 np0005539482 python3.9[208050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:34 np0005539482 python3.9[208202]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:35 np0005539482 python3.9[208354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:36 np0005539482 python3.9[208506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:36 np0005539482 podman[208630]: 2025-11-29 05:23:36.66333089 +0000 UTC m=+0.095655841 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:23:36 np0005539482 python3.9[208680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:37 np0005539482 python3.9[208836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:38 np0005539482 python3.9[208988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:39 np0005539482 python3.9[209140]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:40 np0005539482 python3.9[209292]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:40 np0005539482 python3.9[209444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:23:41
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'images']
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:23:41 np0005539482 python3.9[209596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:42 np0005539482 python3.9[209748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:42 np0005539482 python3.9[209871]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393821.6863337-775-645609685603/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:43 np0005539482 python3.9[210023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:44 np0005539482 python3.9[210146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393823.1811814-775-164234158049912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:45 np0005539482 python3.9[210298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:46 np0005539482 python3.9[210421]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393824.8343341-775-35960863179172/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:46 np0005539482 python3.9[210573]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:47 np0005539482 podman[210670]: 2025-11-29 05:23:47.319130808 +0000 UTC m=+0.065003964 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:23:47 np0005539482 python3.9[210717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393826.2467074-775-83054989246832/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:48 np0005539482 python3.9[210869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:48 np0005539482 python3.9[210992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393827.67271-775-260302718056186/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:49 np0005539482 python3.9[211144]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:50 np0005539482 python3.9[211267]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393829.097444-775-99970257724439/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:51 np0005539482 python3.9[211419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:23:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:51 np0005539482 python3.9[211542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393830.6536858-775-208646610600719/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:52 np0005539482 python3.9[211695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:53 np0005539482 python3.9[211818]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393832.2176907-775-58879929467664/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:54 np0005539482 python3.9[211970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:23:54 np0005539482 python3.9[212093]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393833.6335523-775-53640796163593/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:55 np0005539482 python3.9[212245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:56 np0005539482 python3.9[212368]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393835.0826201-775-221247073141783/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:57 np0005539482 python3.9[212520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:57 np0005539482 python3.9[212643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393836.5555768-775-50405147168642/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:58 np0005539482 python3.9[212795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:59 np0005539482 python3.9[212918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393837.9073584-775-85194055376253/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:23:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:23:59 np0005539482 python3.9[213070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:23:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:00 np0005539482 python3.9[213193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393839.2397733-775-150481900540856/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:00 np0005539482 python3.9[213345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:01 np0005539482 python3.9[213468]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393840.4134438-775-65207750387281/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:02 np0005539482 python3.9[213618]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:03 np0005539482 python3.9[213773]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 00:24:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:05 np0005539482 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 00:24:05 np0005539482 python3.9[213929]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:06 np0005539482 python3.9[214081]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:06 np0005539482 podman[214233]: 2025-11-29 05:24:06.879453705 +0000 UTC m=+0.108149667 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:24:06 np0005539482 python3.9[214234]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:07 np0005539482 python3.9[214414]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:08 np0005539482 python3.9[214566]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:09 np0005539482 python3.9[214718]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:10 np0005539482 python3.9[214870]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:10 np0005539482 python3.9[215022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:24:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:11 np0005539482 python3.9[215174]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:12 np0005539482 python3.9[215327]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:13 np0005539482 python3.9[215479]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:24:13 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:13 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:13 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:24:13.734 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:24:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:24:13.735 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:24:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:24:13.735 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:24:13 np0005539482 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 00:24:13 np0005539482 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 00:24:13 np0005539482 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 00:24:13 np0005539482 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 00:24:13 np0005539482 systemd[1]: Starting libvirt logging daemon...
Nov 29 00:24:13 np0005539482 systemd[1]: Started libvirt logging daemon.
Nov 29 00:24:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:14 np0005539482 python3.9[215672]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:24:14 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:15 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:15 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:15 np0005539482 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 00:24:15 np0005539482 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 00:24:15 np0005539482 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 00:24:15 np0005539482 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 00:24:15 np0005539482 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 00:24:15 np0005539482 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 00:24:15 np0005539482 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 00:24:15 np0005539482 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 00:24:15 np0005539482 systemd[1]: Started libvirt nodedev daemon.
Nov 29 00:24:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:15 np0005539482 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 00:24:15 np0005539482 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 00:24:15 np0005539482 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 00:24:16 np0005539482 python3.9[215949]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:24:16 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:16 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:16 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:16 np0005539482 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 00:24:16 np0005539482 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 00:24:16 np0005539482 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 00:24:16 np0005539482 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 00:24:16 np0005539482 systemd[1]: Starting libvirt proxy daemon...
Nov 29 00:24:16 np0005539482 setroubleshoot[215709]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 68bc54e6-e2dc-49b4-b12f-1375125e19a3
Nov 29 00:24:16 np0005539482 setroubleshoot[215709]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 00:24:16 np0005539482 setroubleshoot[215709]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 68bc54e6-e2dc-49b4-b12f-1375125e19a3
Nov 29 00:24:16 np0005539482 systemd[1]: Started libvirt proxy daemon.
Nov 29 00:24:16 np0005539482 setroubleshoot[215709]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:24:16 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 5fc82019-45c9-4cc5-b242-7f76948dbbbf does not exist
Nov 29 00:24:16 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6c387c9b-f21e-434b-865e-16297cfc1046 does not exist
Nov 29 00:24:16 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ea77a73d-224d-48c5-a8b4-baa6bbb0ceba does not exist
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:24:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:24:17 np0005539482 podman[216328]: 2025-11-29 05:24:17.462113458 +0000 UTC m=+0.079888706 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.532422827 +0000 UTC m=+0.041650680 container create 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:24:17 np0005539482 systemd[1]: Started libpod-conmon-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope.
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.513642251 +0000 UTC m=+0.022870104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:24:17 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:24:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.626439828 +0000 UTC m=+0.135667701 container init 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.635885982 +0000 UTC m=+0.145113815 container start 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.639333133 +0000 UTC m=+0.148560986 container attach 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 00:24:17 np0005539482 dreamy_fermat[216418]: 167 167
Nov 29 00:24:17 np0005539482 systemd[1]: libpod-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope: Deactivated successfully.
Nov 29 00:24:17 np0005539482 conmon[216418]: conmon 5a8d825701377ad69289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope/container/memory.events
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.642003417 +0000 UTC m=+0.151231250 container died 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:24:17 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3c832cf54ee8f31f27ee57202882839be2752d475145bdbb275889a3dfb279c9-merged.mount: Deactivated successfully.
Nov 29 00:24:17 np0005539482 podman[216402]: 2025-11-29 05:24:17.678821311 +0000 UTC m=+0.188049134 container remove 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 00:24:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:24:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:24:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:24:17 np0005539482 systemd[1]: libpod-conmon-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope: Deactivated successfully.
Nov 29 00:24:17 np0005539482 python3.9[216396]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:24:17 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:17 np0005539482 podman[216444]: 2025-11-29 05:24:17.84902735 +0000 UTC m=+0.046306780 container create f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:24:17 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:17 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:17 np0005539482 podman[216444]: 2025-11-29 05:24:17.825719336 +0000 UTC m=+0.022998776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:24:18 np0005539482 systemd[1]: Started libpod-conmon-f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362.scope.
Nov 29 00:24:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:24:18 np0005539482 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 00:24:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:18 np0005539482 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 00:24:18 np0005539482 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 00:24:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:18 np0005539482 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 00:24:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:18 np0005539482 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 00:24:18 np0005539482 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 00:24:18 np0005539482 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 00:24:18 np0005539482 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 00:24:18 np0005539482 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 00:24:18 np0005539482 podman[216444]: 2025-11-29 05:24:18.202640102 +0000 UTC m=+0.399919552 container init f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:24:18 np0005539482 podman[216444]: 2025-11-29 05:24:18.216763147 +0000 UTC m=+0.414042577 container start f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:24:18 np0005539482 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 00:24:18 np0005539482 podman[216444]: 2025-11-29 05:24:18.223582129 +0000 UTC m=+0.420861609 container attach f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:24:18 np0005539482 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 00:24:18 np0005539482 systemd[1]: Started libvirt QEMU daemon.
Nov 29 00:24:19 np0005539482 python3.9[216680]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:24:19 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:19 np0005539482 xenodochial_diffie[216495]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:24:19 np0005539482 xenodochial_diffie[216495]: --> relative data size: 1.0
Nov 29 00:24:19 np0005539482 xenodochial_diffie[216495]: --> All data devices are unavailable
Nov 29 00:24:19 np0005539482 podman[216444]: 2025-11-29 05:24:19.280956931 +0000 UTC m=+1.478236341 container died f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:24:19 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:19 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:19 np0005539482 systemd[1]: libpod-f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362.scope: Deactivated successfully.
Nov 29 00:24:19 np0005539482 systemd[1]: var-lib-containers-storage-overlay-91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a-merged.mount: Deactivated successfully.
Nov 29 00:24:19 np0005539482 podman[216444]: 2025-11-29 05:24:19.577296692 +0000 UTC m=+1.774576092 container remove f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:24:19 np0005539482 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 00:24:19 np0005539482 systemd[1]: libpod-conmon-f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362.scope: Deactivated successfully.
Nov 29 00:24:19 np0005539482 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 00:24:19 np0005539482 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 00:24:19 np0005539482 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 00:24:19 np0005539482 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 00:24:19 np0005539482 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 00:24:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:19 np0005539482 systemd[1]: Starting libvirt secret daemon...
Nov 29 00:24:19 np0005539482 systemd[1]: Started libvirt secret daemon.
Nov 29 00:24:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.402728991 +0000 UTC m=+0.038801632 container create 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:24:20 np0005539482 systemd[1]: Started libpod-conmon-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope.
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.385347609 +0000 UTC m=+0.021420240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:24:20 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.507393365 +0000 UTC m=+0.143466016 container init 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.515463386 +0000 UTC m=+0.151536017 container start 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.518233471 +0000 UTC m=+0.154306132 container attach 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:24:20 np0005539482 systemd[1]: libpod-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope: Deactivated successfully.
Nov 29 00:24:20 np0005539482 interesting_cannon[217082]: 167 167
Nov 29 00:24:20 np0005539482 conmon[217082]: conmon 90a4438b9678e4a0f3b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope/container/memory.events
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.527974862 +0000 UTC m=+0.164047513 container died 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:24:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3c1adb0195826ab8cbd635a3484508200fc8dde6f341d68b330d89936c7c086a-merged.mount: Deactivated successfully.
Nov 29 00:24:20 np0005539482 podman[217052]: 2025-11-29 05:24:20.570622445 +0000 UTC m=+0.206695086 container remove 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:24:20 np0005539482 systemd[1]: libpod-conmon-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope: Deactivated successfully.
Nov 29 00:24:20 np0005539482 python3.9[217079]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:20 np0005539482 podman[217129]: 2025-11-29 05:24:20.771366438 +0000 UTC m=+0.053606512 container create a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:24:20 np0005539482 systemd[1]: Started libpod-conmon-a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533.scope.
Nov 29 00:24:20 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:24:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:20 np0005539482 podman[217129]: 2025-11-29 05:24:20.84392345 +0000 UTC m=+0.126163504 container init a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:24:20 np0005539482 podman[217129]: 2025-11-29 05:24:20.751322873 +0000 UTC m=+0.033562937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:24:20 np0005539482 podman[217129]: 2025-11-29 05:24:20.851206973 +0000 UTC m=+0.133447027 container start a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:24:20 np0005539482 podman[217129]: 2025-11-29 05:24:20.854665956 +0000 UTC m=+0.136906020 container attach a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:24:21 np0005539482 python3.9[217277]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]: {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:    "0": [
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:        {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "devices": [
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "/dev/loop3"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            ],
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_name": "ceph_lv0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_size": "21470642176",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "name": "ceph_lv0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "tags": {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cluster_name": "ceph",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.crush_device_class": "",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.encrypted": "0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osd_id": "0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.type": "block",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.vdo": "0"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            },
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "type": "block",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "vg_name": "ceph_vg0"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:        }
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:    ],
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:    "1": [
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:        {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "devices": [
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "/dev/loop4"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            ],
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_name": "ceph_lv1",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_size": "21470642176",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "name": "ceph_lv1",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "tags": {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cluster_name": "ceph",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.crush_device_class": "",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.encrypted": "0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osd_id": "1",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.type": "block",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.vdo": "0"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            },
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "type": "block",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "vg_name": "ceph_vg1"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:        }
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:    ],
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:    "2": [
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:        {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "devices": [
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "/dev/loop5"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            ],
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_name": "ceph_lv2",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_size": "21470642176",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "name": "ceph_lv2",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "tags": {
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.cluster_name": "ceph",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.crush_device_class": "",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.encrypted": "0",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osd_id": "2",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.type": "block",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:                "ceph.vdo": "0"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            },
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "type": "block",
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:            "vg_name": "ceph_vg2"
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:        }
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]:    ]
Nov 29 00:24:21 np0005539482 gifted_lichterman[217168]: }
Nov 29 00:24:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:21 np0005539482 systemd[1]: libpod-a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533.scope: Deactivated successfully.
Nov 29 00:24:21 np0005539482 podman[217129]: 2025-11-29 05:24:21.628617202 +0000 UTC m=+0.910857336 container died a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:24:21 np0005539482 systemd[1]: var-lib-containers-storage-overlay-866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699-merged.mount: Deactivated successfully.
Nov 29 00:24:21 np0005539482 podman[217129]: 2025-11-29 05:24:21.698126891 +0000 UTC m=+0.980366935 container remove a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:24:21 np0005539482 systemd[1]: libpod-conmon-a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533.scope: Deactivated successfully.
Nov 29 00:24:22 np0005539482 python3.9[217489]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.499782225 +0000 UTC m=+0.066735075 container create 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:24:22 np0005539482 systemd[1]: Started libpod-conmon-687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148.scope.
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.468629485 +0000 UTC m=+0.035582375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:24:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.606965638 +0000 UTC m=+0.173918528 container init 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.618922182 +0000 UTC m=+0.185875012 container start 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.622294172 +0000 UTC m=+0.189247072 container attach 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:24:22 np0005539482 kind_kilby[217689]: 167 167
Nov 29 00:24:22 np0005539482 systemd[1]: libpod-687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148.scope: Deactivated successfully.
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.626925333 +0000 UTC m=+0.193878173 container died 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:24:22 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3133ca2d5e1950a0d5399609e4a551806d9dbda54bdeda3195f2b7615e6db010-merged.mount: Deactivated successfully.
Nov 29 00:24:22 np0005539482 podman[217637]: 2025-11-29 05:24:22.681352613 +0000 UTC m=+0.248305453 container remove 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:24:22 np0005539482 systemd[1]: libpod-conmon-687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148.scope: Deactivated successfully.
Nov 29 00:24:22 np0005539482 podman[217780]: 2025-11-29 05:24:22.902423229 +0000 UTC m=+0.055668911 container create a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:24:22 np0005539482 python3.9[217774]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 00:24:22 np0005539482 systemd[1]: Started libpod-conmon-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope.
Nov 29 00:24:22 np0005539482 podman[217780]: 2025-11-29 05:24:22.876489085 +0000 UTC m=+0.029734847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:24:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:24:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:24:23 np0005539482 podman[217780]: 2025-11-29 05:24:23.025730806 +0000 UTC m=+0.178976598 container init a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:24:23 np0005539482 podman[217780]: 2025-11-29 05:24:23.03852982 +0000 UTC m=+0.191775502 container start a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:24:23 np0005539482 podman[217780]: 2025-11-29 05:24:23.042079284 +0000 UTC m=+0.195325006 container attach a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:24:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:23 np0005539482 python3.9[217958]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]: {
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "osd_id": 0,
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "type": "bluestore"
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:    },
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "osd_id": 1,
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "type": "bluestore"
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:    },
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "osd_id": 2,
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:        "type": "bluestore"
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]:    }
Nov 29 00:24:24 np0005539482 hopeful_herschel[217796]: }
Nov 29 00:24:24 np0005539482 systemd[1]: libpod-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope: Deactivated successfully.
Nov 29 00:24:24 np0005539482 systemd[1]: libpod-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope: Consumed 1.096s CPU time.
Nov 29 00:24:24 np0005539482 podman[217780]: 2025-11-29 05:24:24.13499253 +0000 UTC m=+1.288238212 container died a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:24:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500-merged.mount: Deactivated successfully.
Nov 29 00:24:24 np0005539482 podman[217780]: 2025-11-29 05:24:24.196721715 +0000 UTC m=+1.349967427 container remove a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:24:24 np0005539482 systemd[1]: libpod-conmon-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope: Deactivated successfully.
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:24:24 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 9db73650-75a4-4165-bc3d-b2338aa3177b does not exist
Nov 29 00:24:24 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 5e59ea50-b21e-4abf-8ec6-ba181db911c4 does not exist
Nov 29 00:24:24 np0005539482 python3.9[218161]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393863.3929856-1133-194156182925601/.source.xml follow=False _original_basename=secret.xml.j2 checksum=6a747b6a02a8b21427ead7222f3616a6bd64ba4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:24:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:24:25 np0005539482 python3.9[218315]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 93f82912-647c-5e78-b081-707d0a2966d8#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:26 np0005539482 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 00:24:26 np0005539482 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 00:24:27 np0005539482 python3.9[218477]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:29 np0005539482 python3.9[218940]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:30 np0005539482 python3.9[219092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:31 np0005539482 python3.9[219215]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393869.9537823-1188-44964661678616/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:32 np0005539482 python3.9[219368]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:33 np0005539482 python3.9[219520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:33 np0005539482 python3.9[219598]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:34 np0005539482 python3.9[219750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:35 np0005539482 python3.9[219828]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ljyenoij recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:35 np0005539482 python3.9[219980]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:36 np0005539482 python3.9[220058]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:37 np0005539482 podman[220136]: 2025-11-29 05:24:37.091217018 +0000 UTC m=+0.128292277 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 00:24:37 np0005539482 python3.9[220237]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:38 np0005539482 python3[220390]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 00:24:39 np0005539482 python3.9[220542]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:40 np0005539482 python3.9[220620]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:41 np0005539482 python3.9[220772]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:24:41
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', '.mgr', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:24:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:41 np0005539482 python3.9[220850]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:42 np0005539482 python3.9[221002]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:43 np0005539482 python3.9[221080]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:44 np0005539482 python3.9[221232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:44 np0005539482 python3.9[221310]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:45 np0005539482 python3.9[221462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:46 np0005539482 python3.9[221587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393884.902703-1313-175715023370737/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:46 np0005539482 python3.9[221739]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:47 np0005539482 podman[221891]: 2025-11-29 05:24:47.613001214 +0000 UTC m=+0.084508062 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 00:24:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:47 np0005539482 python3.9[221892]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:48 np0005539482 python3.9[222065]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:49 np0005539482 python3.9[222217]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:50 np0005539482 python3.9[222370]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:24:51 np0005539482 python3.9[222525]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:24:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:52 np0005539482 python3.9[222680]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:53 np0005539482 python3.9[222832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:53 np0005539482 python3.9[222955]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393892.425085-1385-269280242915304/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:54 np0005539482 python3.9[223107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:24:55 np0005539482 python3.9[223230]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393893.933264-1400-9072851297326/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:56 np0005539482 python3.9[223382]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:24:56 np0005539482 python3.9[223505]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393895.577155-1415-364850219449/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:24:57 np0005539482 python3.9[223657]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:24:57 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:57 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:57 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:57 np0005539482 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 00:24:58 np0005539482 python3.9[223847]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 00:24:58 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:58 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:58 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:59 np0005539482 systemd[1]: Reloading.
Nov 29 00:24:59 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:24:59 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:24:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:24:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:00 np0005539482 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 00:25:00 np0005539482 systemd[1]: session-48.scope: Consumed 3min 44.583s CPU time.
Nov 29 00:25:00 np0005539482 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Nov 29 00:25:00 np0005539482 systemd-logind[793]: Removed session 48.
Nov 29 00:25:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:06 np0005539482 systemd-logind[793]: New session 49 of user zuul.
Nov 29 00:25:06 np0005539482 systemd[1]: Started Session 49 of User zuul.
Nov 29 00:25:07 np0005539482 podman[224073]: 2025-11-29 05:25:07.342677123 +0000 UTC m=+0.110854477 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 00:25:07 np0005539482 python3.9[224112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:25:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:08 np0005539482 python3.9[224282]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:25:09 np0005539482 network[224300]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:25:09 np0005539482 network[224301]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:25:09 np0005539482 network[224302]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:25:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:25:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:25:13.735 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:25:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:25:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:25:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:25:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:25:13 np0005539482 python3.9[224574]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 00:25:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:15 np0005539482 python3.9[224658]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:25:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:18 np0005539482 podman[224660]: 2025-11-29 05:25:18.008314853 +0000 UTC m=+0.065870614 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 00:25:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:21 np0005539482 python3.9[224831]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:25:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:22 np0005539482 python3.9[224983]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:25:23 np0005539482 python3.9[225136]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:25:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:23 np0005539482 python3.9[225288]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:25:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:24 np0005539482 python3.9[225514]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:25:25 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 46a12d9b-c7b4-48cd-9f6b-5ec3dac912e0 does not exist
Nov 29 00:25:25 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 4a3ae231-556f-442f-8435-19350988ebc3 does not exist
Nov 29 00:25:25 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 88fbd292-bdc0-4b31-ae16-7731ae8cd413 does not exist
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:25:25 np0005539482 python3.9[225720]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393924.2416449-95-22764366444102/.source.iscsi _original_basename=.f2km9x74 follow=False checksum=1dbba20fb5a1b47e97ac8ad50a96437d1e78147b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:25:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:25:25 np0005539482 podman[225896]: 2025-11-29 05:25:25.994631409 +0000 UTC m=+0.057037951 container create 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 00:25:26 np0005539482 systemd[1]: Started libpod-conmon-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope.
Nov 29 00:25:26 np0005539482 podman[225896]: 2025-11-29 05:25:25.968990979 +0000 UTC m=+0.031397611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:25:26 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:25:26 np0005539482 podman[225896]: 2025-11-29 05:25:26.085346812 +0000 UTC m=+0.147753394 container init 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:25:26 np0005539482 podman[225896]: 2025-11-29 05:25:26.096247803 +0000 UTC m=+0.158654345 container start 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:25:26 np0005539482 podman[225896]: 2025-11-29 05:25:26.100099561 +0000 UTC m=+0.162506103 container attach 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:25:26 np0005539482 great_kepler[225930]: 167 167
Nov 29 00:25:26 np0005539482 systemd[1]: libpod-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope: Deactivated successfully.
Nov 29 00:25:26 np0005539482 conmon[225930]: conmon 7617823e2dc0fb412264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope/container/memory.events
Nov 29 00:25:26 np0005539482 podman[225896]: 2025-11-29 05:25:26.106452828 +0000 UTC m=+0.168859370 container died 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:25:26 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1a5896b736dc73252fc2961ad72e9039df518d50a41a58b7828652753f10a2b7-merged.mount: Deactivated successfully.
Nov 29 00:25:26 np0005539482 podman[225896]: 2025-11-29 05:25:26.15621828 +0000 UTC m=+0.218624822 container remove 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:25:26 np0005539482 systemd[1]: libpod-conmon-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope: Deactivated successfully.
Nov 29 00:25:26 np0005539482 podman[225979]: 2025-11-29 05:25:26.356603563 +0000 UTC m=+0.054917062 container create 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:25:26 np0005539482 systemd[1]: Started libpod-conmon-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope.
Nov 29 00:25:26 np0005539482 podman[225979]: 2025-11-29 05:25:26.337393683 +0000 UTC m=+0.035707262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:25:26 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:25:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:26 np0005539482 podman[225979]: 2025-11-29 05:25:26.473523479 +0000 UTC m=+0.171837008 container init 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:25:26 np0005539482 podman[225979]: 2025-11-29 05:25:26.486604979 +0000 UTC m=+0.184918518 container start 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:25:26 np0005539482 podman[225979]: 2025-11-29 05:25:26.491433891 +0000 UTC m=+0.189747390 container attach 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:25:26 np0005539482 python3.9[226052]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:27 np0005539482 python3.9[226214]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:27 np0005539482 optimistic_kirch[226036]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:25:27 np0005539482 optimistic_kirch[226036]: --> relative data size: 1.0
Nov 29 00:25:27 np0005539482 optimistic_kirch[226036]: --> All data devices are unavailable
Nov 29 00:25:27 np0005539482 systemd[1]: libpod-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope: Deactivated successfully.
Nov 29 00:25:27 np0005539482 podman[225979]: 2025-11-29 05:25:27.642288345 +0000 UTC m=+1.340601834 container died 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:25:27 np0005539482 systemd[1]: libpod-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope: Consumed 1.084s CPU time.
Nov 29 00:25:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b-merged.mount: Deactivated successfully.
Nov 29 00:25:27 np0005539482 podman[225979]: 2025-11-29 05:25:27.819448875 +0000 UTC m=+1.517762404 container remove 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:25:27 np0005539482 systemd[1]: libpod-conmon-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope: Deactivated successfully.
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.583883274 +0000 UTC m=+0.048518596 container create 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:25:28 np0005539482 systemd[1]: Started libpod-conmon-9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8.scope.
Nov 29 00:25:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.657608378 +0000 UTC m=+0.122243740 container init 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.565062682 +0000 UTC m=+0.029698044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.667864743 +0000 UTC m=+0.132500055 container start 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.671913676 +0000 UTC m=+0.136549028 container attach 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:25:28 np0005539482 kind_noether[226554]: 167 167
Nov 29 00:25:28 np0005539482 systemd[1]: libpod-9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8.scope: Deactivated successfully.
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.674937196 +0000 UTC m=+0.139572508 container died 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:25:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7b0a904cad86c7db57501565daadf43bb7182743c14c9dadc818be633bc082fe-merged.mount: Deactivated successfully.
Nov 29 00:25:28 np0005539482 podman[226535]: 2025-11-29 05:25:28.715911296 +0000 UTC m=+0.180546608 container remove 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:25:28 np0005539482 systemd[1]: libpod-conmon-9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8.scope: Deactivated successfully.
Nov 29 00:25:28 np0005539482 python3.9[226543]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:25:28 np0005539482 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 00:25:28 np0005539482 podman[226581]: 2025-11-29 05:25:28.9415764 +0000 UTC m=+0.070348757 container create fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:25:28 np0005539482 systemd[1]: Started libpod-conmon-fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03.scope.
Nov 29 00:25:29 np0005539482 podman[226581]: 2025-11-29 05:25:28.917117438 +0000 UTC m=+0.045889575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:25:29 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:25:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:29 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:29 np0005539482 podman[226581]: 2025-11-29 05:25:29.039008868 +0000 UTC m=+0.167781005 container init fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:25:29 np0005539482 podman[226581]: 2025-11-29 05:25:29.055389154 +0000 UTC m=+0.184161241 container start fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:25:29 np0005539482 podman[226581]: 2025-11-29 05:25:29.059295624 +0000 UTC m=+0.188067761 container attach fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:25:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:29 np0005539482 eager_gould[226612]: {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:    "0": [
Nov 29 00:25:29 np0005539482 eager_gould[226612]:        {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "devices": [
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "/dev/loop3"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            ],
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_name": "ceph_lv0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_size": "21470642176",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "name": "ceph_lv0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "tags": {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cluster_name": "ceph",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.crush_device_class": "",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.encrypted": "0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osd_id": "0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.type": "block",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.vdo": "0"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            },
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "type": "block",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "vg_name": "ceph_vg0"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:        }
Nov 29 00:25:29 np0005539482 eager_gould[226612]:    ],
Nov 29 00:25:29 np0005539482 eager_gould[226612]:    "1": [
Nov 29 00:25:29 np0005539482 eager_gould[226612]:        {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "devices": [
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "/dev/loop4"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            ],
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_name": "ceph_lv1",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_size": "21470642176",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "name": "ceph_lv1",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "tags": {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cluster_name": "ceph",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.crush_device_class": "",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.encrypted": "0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osd_id": "1",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.type": "block",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.vdo": "0"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            },
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "type": "block",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "vg_name": "ceph_vg1"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:        }
Nov 29 00:25:29 np0005539482 eager_gould[226612]:    ],
Nov 29 00:25:29 np0005539482 eager_gould[226612]:    "2": [
Nov 29 00:25:29 np0005539482 eager_gould[226612]:        {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "devices": [
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "/dev/loop5"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            ],
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_name": "ceph_lv2",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_size": "21470642176",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "name": "ceph_lv2",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "tags": {
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.cluster_name": "ceph",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.crush_device_class": "",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.encrypted": "0",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osd_id": "2",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.type": "block",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:                "ceph.vdo": "0"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            },
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "type": "block",
Nov 29 00:25:29 np0005539482 eager_gould[226612]:            "vg_name": "ceph_vg2"
Nov 29 00:25:29 np0005539482 eager_gould[226612]:        }
Nov 29 00:25:29 np0005539482 eager_gould[226612]:    ]
Nov 29 00:25:29 np0005539482 eager_gould[226612]: }
Nov 29 00:25:29 np0005539482 systemd[1]: libpod-fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03.scope: Deactivated successfully.
Nov 29 00:25:29 np0005539482 podman[226581]: 2025-11-29 05:25:29.858202655 +0000 UTC m=+0.986974742 container died fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:25:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d-merged.mount: Deactivated successfully.
Nov 29 00:25:29 np0005539482 python3.9[226756]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:25:29 np0005539482 podman[226581]: 2025-11-29 05:25:29.936957464 +0000 UTC m=+1.065729551 container remove fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 00:25:29 np0005539482 systemd[1]: libpod-conmon-fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03.scope: Deactivated successfully.
Nov 29 00:25:29 np0005539482 systemd[1]: Reloading.
Nov 29 00:25:30 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:25:30 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:25:30 np0005539482 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 00:25:30 np0005539482 systemd[1]: Starting Open-iSCSI...
Nov 29 00:25:30 np0005539482 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 00:25:30 np0005539482 systemd[1]: Started Open-iSCSI.
Nov 29 00:25:30 np0005539482 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 00:25:30 np0005539482 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.822069666 +0000 UTC m=+0.038194510 container create 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:25:30 np0005539482 systemd[1]: Started libpod-conmon-8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08.scope.
Nov 29 00:25:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.896427573 +0000 UTC m=+0.112552457 container init 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.806779674 +0000 UTC m=+0.022904538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.902537953 +0000 UTC m=+0.118662797 container start 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.905624024 +0000 UTC m=+0.121748918 container attach 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:25:30 np0005539482 pedantic_cray[227056]: 167 167
Nov 29 00:25:30 np0005539482 systemd[1]: libpod-8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08.scope: Deactivated successfully.
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.911172612 +0000 UTC m=+0.127297496 container died 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:25:30 np0005539482 systemd[1]: var-lib-containers-storage-overlay-16f1f9d32f760be6161f8a76977e729ad3d73f5761a1061061789f4ec0636388-merged.mount: Deactivated successfully.
Nov 29 00:25:30 np0005539482 podman[227010]: 2025-11-29 05:25:30.940750811 +0000 UTC m=+0.156875655 container remove 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:25:30 np0005539482 systemd[1]: libpod-conmon-8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08.scope: Deactivated successfully.
Nov 29 00:25:31 np0005539482 podman[227153]: 2025-11-29 05:25:31.12791292 +0000 UTC m=+0.047797758 container create dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:25:31 np0005539482 systemd[1]: Started libpod-conmon-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope.
Nov 29 00:25:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:25:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:31 np0005539482 podman[227153]: 2025-11-29 05:25:31.107739597 +0000 UTC m=+0.027624475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:25:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:25:31 np0005539482 podman[227153]: 2025-11-29 05:25:31.217172121 +0000 UTC m=+0.137056989 container init dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:25:31 np0005539482 podman[227153]: 2025-11-29 05:25:31.23802737 +0000 UTC m=+0.157912238 container start dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:25:31 np0005539482 podman[227153]: 2025-11-29 05:25:31.241908758 +0000 UTC m=+0.161793606 container attach dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:25:31 np0005539482 python3.9[227161]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:25:31 np0005539482 network[227192]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:25:31 np0005539482 network[227193]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:25:31 np0005539482 network[227194]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:25:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]: {
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "osd_id": 0,
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "type": "bluestore"
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:    },
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "osd_id": 1,
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "type": "bluestore"
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:    },
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "osd_id": 2,
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:        "type": "bluestore"
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]:    }
Nov 29 00:25:32 np0005539482 xenodochial_wilson[227171]: }
Nov 29 00:25:32 np0005539482 systemd[1]: libpod-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope: Deactivated successfully.
Nov 29 00:25:32 np0005539482 systemd[1]: libpod-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope: Consumed 1.073s CPU time.
Nov 29 00:25:32 np0005539482 podman[227232]: 2025-11-29 05:25:32.363180464 +0000 UTC m=+0.034835471 container died dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:25:32 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4-merged.mount: Deactivated successfully.
Nov 29 00:25:32 np0005539482 podman[227232]: 2025-11-29 05:25:32.42129876 +0000 UTC m=+0.092953697 container remove dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:25:32 np0005539482 systemd[1]: libpod-conmon-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope: Deactivated successfully.
Nov 29 00:25:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:25:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:25:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:25:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:25:32 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d465fc83-1791-4aa7-89a2-e21da226cc45 does not exist
Nov 29 00:25:32 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6b0d1509-4b22-4fca-995c-e7fa208ec254 does not exist
Nov 29 00:25:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:25:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:25:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:35 np0005539482 python3.9[227560]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 00:25:36 np0005539482 python3.9[227712]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 00:25:37 np0005539482 podman[227840]: 2025-11-29 05:25:37.649649025 +0000 UTC m=+0.115075725 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 00:25:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:37 np0005539482 python3.9[227886]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:38 np0005539482 python3.9[228017]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393937.2381482-172-138585904918642/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:39 np0005539482 python3.9[228169]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:40 np0005539482 python3.9[228321]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:25:40 np0005539482 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 00:25:40 np0005539482 systemd[1]: Stopped Load Kernel Modules.
Nov 29 00:25:40 np0005539482 systemd[1]: Stopping Load Kernel Modules...
Nov 29 00:25:40 np0005539482 systemd[1]: Starting Load Kernel Modules...
Nov 29 00:25:40 np0005539482 systemd[1]: Finished Load Kernel Modules.
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:25:41
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'vms']
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:25:41 np0005539482 python3.9[228477]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:25:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:42 np0005539482 python3.9[228629]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:25:42 np0005539482 python3.9[228781]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:25:43 np0005539482 python3.9[228933]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:44 np0005539482 python3.9[229056]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393943.111588-230-109934371066025/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:45 np0005539482 python3.9[229208]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:25:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:45 np0005539482 python3.9[229363]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:46 np0005539482 python3.9[229515]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:47 np0005539482 python3.9[229667]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:48 np0005539482 podman[229791]: 2025-11-29 05:25:48.280165987 +0000 UTC m=+0.061858412 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 00:25:48 np0005539482 python3.9[229836]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:49 np0005539482 python3.9[229988]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:49 np0005539482 python3.9[230140]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:50 np0005539482 python3.9[230292]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:25:51 np0005539482 python3.9[230444]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:25:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:52 np0005539482 python3.9[230598]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:53 np0005539482 python3.9[230750]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:25:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:54 np0005539482 python3.9[230902]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:54 np0005539482 python3.9[230980]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.804976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954805015, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1834, "num_deletes": 250, "total_data_size": 3088085, "memory_usage": 3121256, "flush_reason": "Manual Compaction"}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954821004, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1737857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11741, "largest_seqno": 13574, "table_properties": {"data_size": 1731941, "index_size": 2991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14860, "raw_average_key_size": 20, "raw_value_size": 1718827, "raw_average_value_size": 2325, "num_data_blocks": 139, "num_entries": 739, "num_filter_entries": 739, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393744, "oldest_key_time": 1764393744, "file_creation_time": 1764393954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 16098 microseconds, and 4648 cpu microseconds.
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.821072) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1737857 bytes OK
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.821098) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.822641) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.822661) EVENT_LOG_v1 {"time_micros": 1764393954822654, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.822685) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3080349, prev total WAL file size 3080349, number of live WAL files 2.
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.823974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1697KB)], [29(7723KB)]
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954824062, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9646234, "oldest_snapshot_seqno": -1}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4019 keys, 7647896 bytes, temperature: kUnknown
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954883831, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7647896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7619189, "index_size": 17589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95532, "raw_average_key_size": 23, "raw_value_size": 7544852, "raw_average_value_size": 1877, "num_data_blocks": 767, "num_entries": 4019, "num_filter_entries": 4019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.884142) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7647896 bytes
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.885716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.2 rd, 127.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4432, records dropped: 413 output_compression: NoCompression
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.885747) EVENT_LOG_v1 {"time_micros": 1764393954885732, "job": 12, "event": "compaction_finished", "compaction_time_micros": 59850, "compaction_time_cpu_micros": 34024, "output_level": 6, "num_output_files": 1, "total_output_size": 7647896, "num_input_records": 4432, "num_output_records": 4019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954886513, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954889488, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.823867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:25:54 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:25:55 np0005539482 python3.9[231132]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:56 np0005539482 python3.9[231210]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:25:56 np0005539482 python3.9[231362]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:57 np0005539482 python3.9[231514]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:58 np0005539482 python3.9[231592]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:58 np0005539482 python3.9[231744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:25:59 np0005539482 python3.9[231822]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:25:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:25:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:00 np0005539482 python3.9[231974]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:00 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:00 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:00 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:01 np0005539482 python3.9[232162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:26:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:01 np0005539482 python3.9[232240]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:02 np0005539482 python3.9[232392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:26:03 np0005539482 python3.9[232470]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:04 np0005539482 python3.9[232622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:04 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:04 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:04 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:04 np0005539482 systemd[1]: Starting Create netns directory...
Nov 29 00:26:04 np0005539482 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 00:26:04 np0005539482 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 00:26:04 np0005539482 systemd[1]: Finished Create netns directory.
Nov 29 00:26:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:05 np0005539482 python3.9[232815]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:26:06 np0005539482 python3.9[232967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:26:07 np0005539482 python3.9[233090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393966.1677082-437-33971305157331/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:26:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:08 np0005539482 podman[233115]: 2025-11-29 05:26:08.101707238 +0000 UTC m=+0.154229494 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 00:26:08 np0005539482 python3.9[233268]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:26:09 np0005539482 python3.9[233420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:26:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:10 np0005539482 python3.9[233543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393968.8243692-462-43787007799434/.source.json _original_basename=.1ghertuh follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:10 np0005539482 python3.9[233695]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:26:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:13 np0005539482 python3.9[234122]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 00:26:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:26:13.736 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:26:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:26:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:26:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:26:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:26:14 np0005539482 python3.9[234274]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 00:26:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:15 np0005539482 python3.9[234426]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 00:26:15 np0005539482 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 00:26:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:16 np0005539482 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 00:26:17 np0005539482 python3[234605]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 00:26:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:18 np0005539482 podman[234622]: 2025-11-29 05:26:18.539586826 +0000 UTC m=+1.319150862 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 00:26:18 np0005539482 podman[234656]: 2025-11-29 05:26:18.546548746 +0000 UTC m=+0.166899165 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 00:26:18 np0005539482 podman[234699]: 2025-11-29 05:26:18.738483124 +0000 UTC m=+0.077902769 container create 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:26:18 np0005539482 podman[234699]: 2025-11-29 05:26:18.702911508 +0000 UTC m=+0.042331203 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 00:26:18 np0005539482 python3[234605]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 00:26:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:19 np0005539482 python3.9[234890]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:26:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:20 np0005539482 python3.9[235044]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:21 np0005539482 python3.9[235120]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:26:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:22 np0005539482 python3.9[235271]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393981.2333207-550-266962473204191/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:22 np0005539482 python3.9[235347]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:26:22 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:22 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:22 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:23 np0005539482 python3.9[235458]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:24 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:24 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:24 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:25 np0005539482 systemd[1]: Starting multipathd container...
Nov 29 00:26:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:25 np0005539482 systemd[1]: Started /usr/bin/podman healthcheck run 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.
Nov 29 00:26:25 np0005539482 podman[235498]: 2025-11-29 05:26:25.335603741 +0000 UTC m=+0.173763033 container init 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:26:25 np0005539482 multipathd[235513]: + sudo -E kolla_set_configs
Nov 29 00:26:25 np0005539482 podman[235498]: 2025-11-29 05:26:25.365364934 +0000 UTC m=+0.203524196 container start 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 00:26:25 np0005539482 podman[235498]: multipathd
Nov 29 00:26:25 np0005539482 systemd[1]: Started multipathd container.
Nov 29 00:26:25 np0005539482 multipathd[235513]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 00:26:25 np0005539482 multipathd[235513]: INFO:__main__:Validating config file
Nov 29 00:26:25 np0005539482 multipathd[235513]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 00:26:25 np0005539482 multipathd[235513]: INFO:__main__:Writing out command to execute
Nov 29 00:26:25 np0005539482 multipathd[235513]: ++ cat /run_command
Nov 29 00:26:25 np0005539482 multipathd[235513]: + CMD='/usr/sbin/multipathd -d'
Nov 29 00:26:25 np0005539482 multipathd[235513]: + ARGS=
Nov 29 00:26:25 np0005539482 multipathd[235513]: + sudo kolla_copy_cacerts
Nov 29 00:26:25 np0005539482 multipathd[235513]: + [[ ! -n '' ]]
Nov 29 00:26:25 np0005539482 multipathd[235513]: + . kolla_extend_start
Nov 29 00:26:25 np0005539482 multipathd[235513]: Running command: '/usr/sbin/multipathd -d'
Nov 29 00:26:25 np0005539482 multipathd[235513]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 00:26:25 np0005539482 multipathd[235513]: + umask 0022
Nov 29 00:26:25 np0005539482 multipathd[235513]: + exec /usr/sbin/multipathd -d
Nov 29 00:26:25 np0005539482 podman[235519]: 2025-11-29 05:26:25.462928315 +0000 UTC m=+0.076802315 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:26:25 np0005539482 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-44f10734ebac7d5c.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 00:26:25 np0005539482 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-44f10734ebac7d5c.service: Failed with result 'exit-code'.
Nov 29 00:26:25 np0005539482 multipathd[235513]: 3098.125926 | --------start up--------
Nov 29 00:26:25 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:26:25 np0005539482 multipathd[235513]: 3098.125942 | read /etc/multipath.conf
Nov 29 00:26:25 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:26:25 np0005539482 multipathd[235513]: 3098.133128 | path checkers start up
Nov 29 00:26:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:26 np0005539482 python3.9[235702]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:26:27 np0005539482 python3.9[235856]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:26:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:28 np0005539482 python3.9[236021]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:26:28 np0005539482 systemd[1]: Stopping multipathd container...
Nov 29 00:26:28 np0005539482 multipathd[235513]: 3100.841460 | exit (signal)
Nov 29 00:26:28 np0005539482 multipathd[235513]: 3100.841569 | --------shut down-------
Nov 29 00:26:28 np0005539482 systemd[1]: libpod-48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.scope: Deactivated successfully.
Nov 29 00:26:28 np0005539482 podman[236025]: 2025-11-29 05:26:28.209671856 +0000 UTC m=+0.074040283 container died 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 00:26:28 np0005539482 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-44f10734ebac7d5c.timer: Deactivated successfully.
Nov 29 00:26:28 np0005539482 systemd[1]: Stopped /usr/bin/podman healthcheck run 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.
Nov 29 00:26:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-userdata-shm.mount: Deactivated successfully.
Nov 29 00:26:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88-merged.mount: Deactivated successfully.
Nov 29 00:26:28 np0005539482 podman[236025]: 2025-11-29 05:26:28.48155623 +0000 UTC m=+0.345924657 container cleanup 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 00:26:28 np0005539482 podman[236025]: multipathd
Nov 29 00:26:28 np0005539482 podman[236052]: multipathd
Nov 29 00:26:28 np0005539482 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 00:26:28 np0005539482 systemd[1]: Stopped multipathd container.
Nov 29 00:26:28 np0005539482 systemd[1]: Starting multipathd container...
Nov 29 00:26:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:28 np0005539482 systemd[1]: Started /usr/bin/podman healthcheck run 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.
Nov 29 00:26:28 np0005539482 podman[236065]: 2025-11-29 05:26:28.724208783 +0000 UTC m=+0.143525213 container init 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 00:26:28 np0005539482 multipathd[236080]: + sudo -E kolla_set_configs
Nov 29 00:26:28 np0005539482 podman[236065]: 2025-11-29 05:26:28.761088849 +0000 UTC m=+0.180405239 container start 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 00:26:28 np0005539482 podman[236065]: multipathd
Nov 29 00:26:28 np0005539482 systemd[1]: Started multipathd container.
Nov 29 00:26:28 np0005539482 multipathd[236080]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 00:26:28 np0005539482 multipathd[236080]: INFO:__main__:Validating config file
Nov 29 00:26:28 np0005539482 multipathd[236080]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 00:26:28 np0005539482 multipathd[236080]: INFO:__main__:Writing out command to execute
Nov 29 00:26:28 np0005539482 multipathd[236080]: ++ cat /run_command
Nov 29 00:26:28 np0005539482 multipathd[236080]: + CMD='/usr/sbin/multipathd -d'
Nov 29 00:26:28 np0005539482 multipathd[236080]: + ARGS=
Nov 29 00:26:28 np0005539482 multipathd[236080]: + sudo kolla_copy_cacerts
Nov 29 00:26:28 np0005539482 multipathd[236080]: + [[ ! -n '' ]]
Nov 29 00:26:28 np0005539482 multipathd[236080]: + . kolla_extend_start
Nov 29 00:26:28 np0005539482 multipathd[236080]: Running command: '/usr/sbin/multipathd -d'
Nov 29 00:26:28 np0005539482 multipathd[236080]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 00:26:28 np0005539482 multipathd[236080]: + umask 0022
Nov 29 00:26:28 np0005539482 multipathd[236080]: + exec /usr/sbin/multipathd -d
Nov 29 00:26:28 np0005539482 podman[236087]: 2025-11-29 05:26:28.868337309 +0000 UTC m=+0.086765902 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 00:26:28 np0005539482 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-2455cf12635f5daf.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 00:26:28 np0005539482 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-2455cf12635f5daf.service: Failed with result 'exit-code'.
Nov 29 00:26:28 np0005539482 multipathd[236080]: 3101.537524 | --------start up--------
Nov 29 00:26:28 np0005539482 multipathd[236080]: 3101.537542 | read /etc/multipath.conf
Nov 29 00:26:28 np0005539482 multipathd[236080]: 3101.544582 | path checkers start up
Nov 29 00:26:28 np0005539482 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 00:26:28 np0005539482 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 00:26:29 np0005539482 python3.9[236274]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:30 np0005539482 python3.9[236426]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 00:26:31 np0005539482 python3.9[236578]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 00:26:31 np0005539482 kernel: Key type psk registered
Nov 29 00:26:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:32 np0005539482 python3.9[236739]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:26:32 np0005539482 python3.9[236885]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393991.6954017-630-219166139847673/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:26:33 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 291e8600-3135-47de-ac76-2f4364410b03 does not exist
Nov 29 00:26:33 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev abb25670-3898-4c4e-a885-0da21437ec03 does not exist
Nov 29 00:26:33 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b125312b-0666-45b7-8a58-733e8df32a2b does not exist
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:26:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:33 np0005539482 python3.9[237145]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:26:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.270357686 +0000 UTC m=+0.051509214 container create ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:26:34 np0005539482 systemd[1]: Started libpod-conmon-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope.
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.24953515 +0000 UTC m=+0.030686708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:26:34 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.361109404 +0000 UTC m=+0.142260962 container init ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.375242728 +0000 UTC m=+0.156394286 container start ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.380458765 +0000 UTC m=+0.161610323 container attach ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:26:34 np0005539482 goofy_carson[237437]: 167 167
Nov 29 00:26:34 np0005539482 systemd[1]: libpod-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope: Deactivated successfully.
Nov 29 00:26:34 np0005539482 conmon[237437]: conmon ac970f02bf711da42730 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope/container/memory.events
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.383841807 +0000 UTC m=+0.164993355 container died ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:26:34 np0005539482 systemd[1]: var-lib-containers-storage-overlay-834e37d757161bbdb7685c0a9f939bff050cc45269cfbacac8d6d2dbe0cd62f2-merged.mount: Deactivated successfully.
Nov 29 00:26:34 np0005539482 podman[237384]: 2025-11-29 05:26:34.422337324 +0000 UTC m=+0.203488842 container remove ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:26:34 np0005539482 systemd[1]: libpod-conmon-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope: Deactivated successfully.
Nov 29 00:26:34 np0005539482 podman[237478]: 2025-11-29 05:26:34.618800933 +0000 UTC m=+0.057256424 container create d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:26:34 np0005539482 systemd[1]: Started libpod-conmon-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope.
Nov 29 00:26:34 np0005539482 podman[237478]: 2025-11-29 05:26:34.590569007 +0000 UTC m=+0.029024528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:26:34 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:34 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:34 np0005539482 python3.9[237457]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:26:34 np0005539482 podman[237478]: 2025-11-29 05:26:34.737179524 +0000 UTC m=+0.175635045 container init d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:26:34 np0005539482 podman[237478]: 2025-11-29 05:26:34.745442515 +0000 UTC m=+0.183898046 container start d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:26:34 np0005539482 podman[237478]: 2025-11-29 05:26:34.750339603 +0000 UTC m=+0.188795134 container attach d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:26:34 np0005539482 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 00:26:34 np0005539482 systemd[1]: Stopped Load Kernel Modules.
Nov 29 00:26:34 np0005539482 systemd[1]: Stopping Load Kernel Modules...
Nov 29 00:26:34 np0005539482 systemd[1]: Starting Load Kernel Modules...
Nov 29 00:26:34 np0005539482 systemd[1]: Finished Load Kernel Modules.
Nov 29 00:26:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:35 np0005539482 python3.9[237661]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 00:26:35 np0005539482 jolly_kare[237495]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:26:35 np0005539482 jolly_kare[237495]: --> relative data size: 1.0
Nov 29 00:26:35 np0005539482 jolly_kare[237495]: --> All data devices are unavailable
Nov 29 00:26:35 np0005539482 systemd[1]: libpod-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope: Deactivated successfully.
Nov 29 00:26:35 np0005539482 systemd[1]: libpod-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope: Consumed 1.055s CPU time.
Nov 29 00:26:35 np0005539482 podman[237478]: 2025-11-29 05:26:35.889040485 +0000 UTC m=+1.327496006 container died d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:26:35 np0005539482 systemd[1]: var-lib-containers-storage-overlay-68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc-merged.mount: Deactivated successfully.
Nov 29 00:26:35 np0005539482 podman[237478]: 2025-11-29 05:26:35.957197633 +0000 UTC m=+1.395653114 container remove d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:26:35 np0005539482 systemd[1]: libpod-conmon-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope: Deactivated successfully.
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.589802603 +0000 UTC m=+0.065894434 container create 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:26:36 np0005539482 systemd[1]: Started libpod-conmon-5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0.scope.
Nov 29 00:26:36 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.560455329 +0000 UTC m=+0.036547210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.666798836 +0000 UTC m=+0.142890667 container init 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.674613916 +0000 UTC m=+0.150705717 container start 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.677486646 +0000 UTC m=+0.153578497 container attach 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:26:36 np0005539482 hopeful_austin[237853]: 167 167
Nov 29 00:26:36 np0005539482 systemd[1]: libpod-5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0.scope: Deactivated successfully.
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.681897994 +0000 UTC m=+0.157989795 container died 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:26:36 np0005539482 systemd[1]: var-lib-containers-storage-overlay-342d5e87cd5202bff57454d408f2078ce490b107c84f935e125d8c9d60d3ada6-merged.mount: Deactivated successfully.
Nov 29 00:26:36 np0005539482 podman[237836]: 2025-11-29 05:26:36.717470469 +0000 UTC m=+0.193562270 container remove 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:26:36 np0005539482 systemd[1]: libpod-conmon-5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0.scope: Deactivated successfully.
Nov 29 00:26:36 np0005539482 podman[237876]: 2025-11-29 05:26:36.959436285 +0000 UTC m=+0.070935747 container create 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:26:37 np0005539482 systemd[1]: Started libpod-conmon-91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059.scope.
Nov 29 00:26:37 np0005539482 podman[237876]: 2025-11-29 05:26:36.927182871 +0000 UTC m=+0.038682343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:26:37 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:37 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:37 np0005539482 podman[237876]: 2025-11-29 05:26:37.075054878 +0000 UTC m=+0.186554350 container init 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:26:37 np0005539482 podman[237876]: 2025-11-29 05:26:37.096361587 +0000 UTC m=+0.207861019 container start 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:26:37 np0005539482 podman[237876]: 2025-11-29 05:26:37.1002489 +0000 UTC m=+0.211748362 container attach 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:26:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]: {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:    "0": [
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:        {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "devices": [
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "/dev/loop3"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            ],
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_name": "ceph_lv0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_size": "21470642176",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "name": "ceph_lv0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "tags": {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cluster_name": "ceph",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.crush_device_class": "",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.encrypted": "0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osd_id": "0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.type": "block",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.vdo": "0"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            },
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "type": "block",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "vg_name": "ceph_vg0"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:        }
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:    ],
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:    "1": [
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:        {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "devices": [
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "/dev/loop4"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            ],
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_name": "ceph_lv1",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_size": "21470642176",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "name": "ceph_lv1",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "tags": {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cluster_name": "ceph",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.crush_device_class": "",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.encrypted": "0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osd_id": "1",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.type": "block",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.vdo": "0"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            },
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "type": "block",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "vg_name": "ceph_vg1"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:        }
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:    ],
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:    "2": [
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:        {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "devices": [
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "/dev/loop5"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            ],
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_name": "ceph_lv2",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_size": "21470642176",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "name": "ceph_lv2",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "tags": {
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.cluster_name": "ceph",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.crush_device_class": "",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.encrypted": "0",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osd_id": "2",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.type": "block",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:                "ceph.vdo": "0"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            },
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "type": "block",
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:            "vg_name": "ceph_vg2"
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:        }
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]:    ]
Nov 29 00:26:37 np0005539482 wonderful_vaughan[237893]: }
Nov 29 00:26:37 np0005539482 systemd[1]: libpod-91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059.scope: Deactivated successfully.
Nov 29 00:26:37 np0005539482 podman[237876]: 2025-11-29 05:26:37.814810195 +0000 UTC m=+0.926309617 container died 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:26:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45-merged.mount: Deactivated successfully.
Nov 29 00:26:37 np0005539482 podman[237876]: 2025-11-29 05:26:37.872876657 +0000 UTC m=+0.984376079 container remove 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:26:37 np0005539482 systemd[1]: libpod-conmon-91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059.scope: Deactivated successfully.
Nov 29 00:26:38 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:38 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:38 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:38 np0005539482 podman[237993]: 2025-11-29 05:26:38.317309889 +0000 UTC m=+0.154597892 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:26:38 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:38 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:38 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:38 np0005539482 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 00:26:39 np0005539482 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 00:26:39 np0005539482 lvm[238152]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 00:26:39 np0005539482 lvm[238152]: VG ceph_vg0 finished
Nov 29 00:26:39 np0005539482 lvm[238153]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 00:26:39 np0005539482 lvm[238153]: VG ceph_vg1 finished
Nov 29 00:26:39 np0005539482 lvm[238154]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 00:26:39 np0005539482 lvm[238154]: VG ceph_vg2 finished
Nov 29 00:26:39 np0005539482 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:26:39 np0005539482 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:26:39 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.280490671 +0000 UTC m=+0.039985354 container create 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.261531439 +0000 UTC m=+0.021026172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:26:39 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:39 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:39 np0005539482 systemd[1]: Started libpod-conmon-4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc.scope.
Nov 29 00:26:39 np0005539482 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:26:39 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.683857154 +0000 UTC m=+0.443351847 container init 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.691483989 +0000 UTC m=+0.450978682 container start 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.694763938 +0000 UTC m=+0.454258641 container attach 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:26:39 np0005539482 adoring_panini[238459]: 167 167
Nov 29 00:26:39 np0005539482 systemd[1]: libpod-4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc.scope: Deactivated successfully.
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.699129004 +0000 UTC m=+0.458623707 container died 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:26:39 np0005539482 systemd[1]: var-lib-containers-storage-overlay-24725f370454177cb46c91d79f97e8cd7a6d0deb245ff8c2aa64a3cdbb1618bb-merged.mount: Deactivated successfully.
Nov 29 00:26:39 np0005539482 podman[238216]: 2025-11-29 05:26:39.738061665 +0000 UTC m=+0.497556348 container remove 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:26:39 np0005539482 systemd[1]: libpod-conmon-4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc.scope: Deactivated successfully.
Nov 29 00:26:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:39 np0005539482 podman[238713]: 2025-11-29 05:26:39.922531385 +0000 UTC m=+0.058048654 container create 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:26:39 np0005539482 systemd[1]: Started libpod-conmon-1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54.scope.
Nov 29 00:26:39 np0005539482 podman[238713]: 2025-11-29 05:26:39.896153877 +0000 UTC m=+0.031671236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:26:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:26:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:26:40 np0005539482 podman[238713]: 2025-11-29 05:26:40.030217228 +0000 UTC m=+0.165734497 container init 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:26:40 np0005539482 podman[238713]: 2025-11-29 05:26:40.037391701 +0000 UTC m=+0.172908970 container start 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:26:40 np0005539482 podman[238713]: 2025-11-29 05:26:40.045559619 +0000 UTC m=+0.181076888 container attach 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:26:40 np0005539482 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:26:40 np0005539482 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:26:40 np0005539482 systemd[1]: man-db-cache-update.service: Consumed 1.629s CPU time.
Nov 29 00:26:40 np0005539482 systemd[1]: run-rd2df37efd9a94adb825097f8ce549af6.service: Deactivated successfully.
Nov 29 00:26:40 np0005539482 python3.9[239594]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]: {
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "osd_id": 0,
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "type": "bluestore"
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:    },
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "osd_id": 1,
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "type": "bluestore"
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:    },
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "osd_id": 2,
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:        "type": "bluestore"
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]:    }
Nov 29 00:26:40 np0005539482 gracious_hodgkin[238834]: }
Nov 29 00:26:40 np0005539482 systemd[1]: Stopping Open-iSCSI...
Nov 29 00:26:40 np0005539482 iscsid[226839]: iscsid shutting down.
Nov 29 00:26:40 np0005539482 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 00:26:40 np0005539482 systemd[1]: Stopped Open-iSCSI.
Nov 29 00:26:40 np0005539482 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 00:26:40 np0005539482 podman[238713]: 2025-11-29 05:26:40.981964626 +0000 UTC m=+1.117481895 container died 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:26:40 np0005539482 systemd[1]: Starting Open-iSCSI...
Nov 29 00:26:40 np0005539482 systemd[1]: libpod-1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54.scope: Deactivated successfully.
Nov 29 00:26:41 np0005539482 systemd[1]: var-lib-containers-storage-overlay-51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055-merged.mount: Deactivated successfully.
Nov 29 00:26:41 np0005539482 systemd[1]: Started Open-iSCSI.
Nov 29 00:26:41 np0005539482 podman[238713]: 2025-11-29 05:26:41.034423404 +0000 UTC m=+1.169940673 container remove 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:26:41 np0005539482 systemd[1]: libpod-conmon-1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54.scope: Deactivated successfully.
Nov 29 00:26:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:26:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:26:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:26:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 379ce5a1-ca6a-476f-9714-0afde0dd280e does not exist
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d05f0423-3ca4-4652-a67c-137a9896c30c does not exist
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:26:41
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data']
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:26:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:41 np0005539482 python3.9[239836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:26:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:26:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:26:43 np0005539482 python3.9[239994]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:44 np0005539482 python3.9[240146]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:26:44 np0005539482 systemd[1]: Reloading.
Nov 29 00:26:44 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:26:44 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:26:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:45 np0005539482 python3.9[240331]: ansible-ansible.builtin.service_facts Invoked
Nov 29 00:26:45 np0005539482 network[240348]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 00:26:45 np0005539482 network[240349]: 'network-scripts' will be removed from distribution in near future.
Nov 29 00:26:45 np0005539482 network[240350]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 00:26:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:48 np0005539482 podman[240444]: 2025-11-29 05:26:48.707115001 +0000 UTC m=+0.097715134 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:26:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:50 np0005539482 python3.9[240644]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:51 np0005539482 python3.9[240797]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:26:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:51 np0005539482 python3.9[240950]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:52 np0005539482 python3.9[241103]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:53 np0005539482 python3.9[241256]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:54 np0005539482 python3.9[241409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:26:55 np0005539482 python3.9[241562]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:56 np0005539482 python3.9[241715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:26:57 np0005539482 python3.9[241868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:58 np0005539482 python3.9[242020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:58 np0005539482 python3.9[242172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:59 np0005539482 podman[242173]: 2025-11-29 05:26:59.013492826 +0000 UTC m=+0.068524878 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 00:26:59 np0005539482 python3.9[242344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:26:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:26:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:00 np0005539482 python3.9[242496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:00 np0005539482 python3.9[242648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:01 np0005539482 python3.9[242802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:02 np0005539482 python3.9[242954]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:02 np0005539482 python3.9[243106]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:03 np0005539482 python3.9[243258]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:04 np0005539482 python3.9[243410]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:05 np0005539482 python3.9[243562]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:05 np0005539482 python3.9[243714]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:06 np0005539482 python3.9[243866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:07 np0005539482 python3.9[244018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:08 np0005539482 python3.9[244171]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:08 np0005539482 podman[244295]: 2025-11-29 05:27:08.796452517 +0000 UTC m=+0.099260770 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:27:08 np0005539482 python3.9[244346]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:10 np0005539482 python3.9[244502]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 00:27:10 np0005539482 python3.9[244654]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:27:10 np0005539482 systemd[1]: Reloading.
Nov 29 00:27:11 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:27:11 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:27:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:12 np0005539482 python3.9[244841]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:12 np0005539482 python3.9[244994]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:27:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:27:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:27:13.739 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:27:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:27:13.739 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:27:13 np0005539482 python3.9[245147]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:14 np0005539482 python3.9[245300]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:15 np0005539482 python3.9[245453]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:15 np0005539482 python3.9[245606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:16 np0005539482 python3.9[245759]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:17 np0005539482 python3.9[245912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:27:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:18 np0005539482 podman[246037]: 2025-11-29 05:27:18.95625304 +0000 UTC m=+0.098260387 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 00:27:19 np0005539482 python3.9[246083]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:20 np0005539482 python3.9[246236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:20 np0005539482 python3.9[246388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:21 np0005539482 python3.9[246540]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:22 np0005539482 python3.9[246692]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:22 np0005539482 python3.9[246844]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:23 np0005539482 python3.9[246998]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:24 np0005539482 python3.9[247152]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:25 np0005539482 python3.9[247304]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:26 np0005539482 python3.9[247456]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:30 np0005539482 podman[247483]: 2025-11-29 05:27:30.049635741 +0000 UTC m=+0.101696970 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:27:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:31 np0005539482 python3.9[247627]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 00:27:32 np0005539482 python3.9[247780]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 00:27:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:33 np0005539482 python3.9[247938]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.835877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054835934, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1252, "num_deletes": 505, "total_data_size": 1470240, "memory_usage": 1499984, "flush_reason": "Manual Compaction"}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054853014, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1456348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13575, "largest_seqno": 14826, "table_properties": {"data_size": 1450796, "index_size": 2500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14069, "raw_average_key_size": 17, "raw_value_size": 1437755, "raw_average_value_size": 1826, "num_data_blocks": 114, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393955, "oldest_key_time": 1764393955, "file_creation_time": 1764394054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 17189 microseconds, and 7705 cpu microseconds.
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.853072) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1456348 bytes OK
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.853099) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855126) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855171) EVENT_LOG_v1 {"time_micros": 1764394054855161, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855194) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1463515, prev total WAL file size 1463515, number of live WAL files 2.
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1422KB)], [32(7468KB)]
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054855991, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9104244, "oldest_snapshot_seqno": -1}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3783 keys, 7161207 bytes, temperature: kUnknown
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054918098, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7161207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7134134, "index_size": 16531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92688, "raw_average_key_size": 24, "raw_value_size": 7063844, "raw_average_value_size": 1867, "num_data_blocks": 701, "num_entries": 3783, "num_filter_entries": 3783, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.918319) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7161207 bytes
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.919806) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.5 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 4806, records dropped: 1023 output_compression: NoCompression
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.919837) EVENT_LOG_v1 {"time_micros": 1764394054919813, "job": 14, "event": "compaction_finished", "compaction_time_micros": 62156, "compaction_time_cpu_micros": 33145, "output_level": 6, "num_output_files": 1, "total_output_size": 7161207, "num_input_records": 4806, "num_output_records": 3783, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054920164, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054922490, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:27:34 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:27:34 np0005539482 systemd-logind[793]: New session 50 of user zuul.
Nov 29 00:27:34 np0005539482 systemd[1]: Started Session 50 of User zuul.
Nov 29 00:27:35 np0005539482 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 00:27:35 np0005539482 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Nov 29 00:27:35 np0005539482 systemd-logind[793]: Removed session 50.
Nov 29 00:27:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:35 np0005539482 python3.9[248124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:36 np0005539482 python3.9[248245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394055.3405144-1249-256034503309412/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:37 np0005539482 python3.9[248395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:37 np0005539482 python3.9[248471]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:38 np0005539482 python3.9[248621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:39 np0005539482 podman[248692]: 2025-11-29 05:27:39.06065182 +0000 UTC m=+0.109751994 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 00:27:39 np0005539482 python3.9[248768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394058.057398-1249-116968742759637/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:40 np0005539482 python3.9[248918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:40 np0005539482 python3.9[249039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394059.4375298-1249-224288687081221/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:27:41
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr']
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:27:41 np0005539482 python3.9[249256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:27:42 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev c1e97b11-93db-4f0b-b7e5-a78e881454c1 does not exist
Nov 29 00:27:42 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f405689a-a342-4382-bba0-4224ba2e4503 does not exist
Nov 29 00:27:42 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 85ce1be6-5335-4867-a5e8-3b03b25fc6a8 does not exist
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:27:42 np0005539482 python3.9[249442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394061.0879936-1249-252044331660310/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.63004865 +0000 UTC m=+0.066399147 container create 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:27:42 np0005539482 systemd[1]: Started libpod-conmon-5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72.scope.
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.601223063 +0000 UTC m=+0.037573620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:27:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.724034412 +0000 UTC m=+0.160384919 container init 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.732025165 +0000 UTC m=+0.168375642 container start 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.735508619 +0000 UTC m=+0.171859096 container attach 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:27:42 np0005539482 objective_germain[249748]: 167 167
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.738554892 +0000 UTC m=+0.174905359 container died 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:27:42 np0005539482 systemd[1]: libpod-5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72.scope: Deactivated successfully.
Nov 29 00:27:42 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6bfa14f56164b59930fb2e5730afa5a7d9bb7b24e99fb8f4facf52591b966bcf-merged.mount: Deactivated successfully.
Nov 29 00:27:42 np0005539482 podman[249698]: 2025-11-29 05:27:42.779853101 +0000 UTC m=+0.216203558 container remove 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:27:42 np0005539482 systemd[1]: libpod-conmon-5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72.scope: Deactivated successfully.
Nov 29 00:27:42 np0005539482 python3.9[249745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:42 np0005539482 podman[249778]: 2025-11-29 05:27:42.949726788 +0000 UTC m=+0.043479152 container create ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:27:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:27:42 np0005539482 systemd[1]: Started libpod-conmon-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope.
Nov 29 00:27:43 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:27:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:43 np0005539482 podman[249778]: 2025-11-29 05:27:42.935668618 +0000 UTC m=+0.029421012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:27:43 np0005539482 podman[249778]: 2025-11-29 05:27:43.055024923 +0000 UTC m=+0.148777297 container init ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:27:43 np0005539482 podman[249778]: 2025-11-29 05:27:43.06649342 +0000 UTC m=+0.160245784 container start ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:27:43 np0005539482 podman[249778]: 2025-11-29 05:27:43.069967695 +0000 UTC m=+0.163720119 container attach ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:27:43 np0005539482 python3.9[249913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394062.3033223-1249-56338350063723/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:44 np0005539482 exciting_rubin[249834]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:27:44 np0005539482 exciting_rubin[249834]: --> relative data size: 1.0
Nov 29 00:27:44 np0005539482 exciting_rubin[249834]: --> All data devices are unavailable
Nov 29 00:27:44 np0005539482 systemd[1]: libpod-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope: Deactivated successfully.
Nov 29 00:27:44 np0005539482 systemd[1]: libpod-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope: Consumed 1.079s CPU time.
Nov 29 00:27:44 np0005539482 podman[249778]: 2025-11-29 05:27:44.211702145 +0000 UTC m=+1.305454549 container died ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:27:44 np0005539482 systemd[1]: var-lib-containers-storage-overlay-eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac-merged.mount: Deactivated successfully.
Nov 29 00:27:44 np0005539482 podman[249778]: 2025-11-29 05:27:44.289071557 +0000 UTC m=+1.382823931 container remove ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:27:44 np0005539482 systemd[1]: libpod-conmon-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope: Deactivated successfully.
Nov 29 00:27:44 np0005539482 python3.9[250085]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:45.012032564 +0000 UTC m=+0.042848138 container create 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:27:45 np0005539482 systemd[1]: Started libpod-conmon-8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c.scope.
Nov 29 00:27:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:45.089188719 +0000 UTC m=+0.120004373 container init 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:44.995524414 +0000 UTC m=+0.026340008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:45.095383818 +0000 UTC m=+0.126199392 container start 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:45.098057553 +0000 UTC m=+0.128873217 container attach 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:27:45 np0005539482 eager_bouman[250413]: 167 167
Nov 29 00:27:45 np0005539482 systemd[1]: libpod-8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c.scope: Deactivated successfully.
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:45.100685697 +0000 UTC m=+0.131501291 container died 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:27:45 np0005539482 python3.9[250395]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:27:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4e59698ba24ba4753a25dcf3f4169eeb29eca7cbf325c3f6a12b9f9c1f32373d-merged.mount: Deactivated successfully.
Nov 29 00:27:45 np0005539482 podman[250397]: 2025-11-29 05:27:45.138071411 +0000 UTC m=+0.168886995 container remove 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:27:45 np0005539482 systemd[1]: libpod-conmon-8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c.scope: Deactivated successfully.
Nov 29 00:27:45 np0005539482 podman[250461]: 2025-11-29 05:27:45.293100649 +0000 UTC m=+0.043323859 container create 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:27:45 np0005539482 systemd[1]: Started libpod-conmon-41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb.scope.
Nov 29 00:27:45 np0005539482 podman[250461]: 2025-11-29 05:27:45.274991031 +0000 UTC m=+0.025214221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:27:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:27:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:45 np0005539482 podman[250461]: 2025-11-29 05:27:45.404484771 +0000 UTC m=+0.154708051 container init 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:27:45 np0005539482 podman[250461]: 2025-11-29 05:27:45.418558401 +0000 UTC m=+0.168781611 container start 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:27:45 np0005539482 podman[250461]: 2025-11-29 05:27:45.423211494 +0000 UTC m=+0.173434704 container attach 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:27:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:45 np0005539482 python3.9[250609]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]: {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:    "0": [
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:        {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "devices": [
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "/dev/loop3"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            ],
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_name": "ceph_lv0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_size": "21470642176",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "name": "ceph_lv0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "tags": {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cluster_name": "ceph",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.crush_device_class": "",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.encrypted": "0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osd_id": "0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.type": "block",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.vdo": "0"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            },
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "type": "block",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "vg_name": "ceph_vg0"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:        }
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:    ],
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:    "1": [
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:        {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "devices": [
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "/dev/loop4"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            ],
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_name": "ceph_lv1",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_size": "21470642176",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "name": "ceph_lv1",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "tags": {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cluster_name": "ceph",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.crush_device_class": "",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.encrypted": "0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osd_id": "1",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.type": "block",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.vdo": "0"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            },
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "type": "block",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "vg_name": "ceph_vg1"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:        }
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:    ],
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:    "2": [
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:        {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "devices": [
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "/dev/loop5"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            ],
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_name": "ceph_lv2",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_size": "21470642176",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "name": "ceph_lv2",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "tags": {
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.cluster_name": "ceph",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.crush_device_class": "",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.encrypted": "0",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osd_id": "2",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.type": "block",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:                "ceph.vdo": "0"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            },
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "type": "block",
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:            "vg_name": "ceph_vg2"
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:        }
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]:    ]
Nov 29 00:27:46 np0005539482 condescending_knuth[250500]: }
Nov 29 00:27:46 np0005539482 systemd[1]: libpod-41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb.scope: Deactivated successfully.
Nov 29 00:27:46 np0005539482 podman[250651]: 2025-11-29 05:27:46.226214786 +0000 UTC m=+0.026837939 container died 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:27:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a-merged.mount: Deactivated successfully.
Nov 29 00:27:46 np0005539482 podman[250651]: 2025-11-29 05:27:46.290670174 +0000 UTC m=+0.091293317 container remove 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:27:46 np0005539482 systemd[1]: libpod-conmon-41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb.scope: Deactivated successfully.
Nov 29 00:27:46 np0005539482 python3.9[250853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.113317032 +0000 UTC m=+0.070242999 container create 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 00:27:47 np0005539482 systemd[1]: Started libpod-conmon-9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68.scope.
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.082074426 +0000 UTC m=+0.039000433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:27:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.23155203 +0000 UTC m=+0.188477997 container init 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.244804871 +0000 UTC m=+0.201730828 container start 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.250213321 +0000 UTC m=+0.207139298 container attach 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:27:47 np0005539482 wizardly_feynman[251032]: 167 167
Nov 29 00:27:47 np0005539482 systemd[1]: libpod-9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68.scope: Deactivated successfully.
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.253842009 +0000 UTC m=+0.210767936 container died 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:27:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9dfe395a33480fa2f65789d3010a63d750f6869dec0c0e19adb8388d1edd4e63-merged.mount: Deactivated successfully.
Nov 29 00:27:47 np0005539482 podman[250987]: 2025-11-29 05:27:47.29360597 +0000 UTC m=+0.250531927 container remove 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:27:47 np0005539482 systemd[1]: libpod-conmon-9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68.scope: Deactivated successfully.
Nov 29 00:27:47 np0005539482 python3.9[251077]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764394066.2151036-1356-241854222022565/.source _original_basename=.6k0tq8qg follow=False checksum=bf754058a6438a797db5195aacffe88f31464064 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 00:27:47 np0005539482 podman[251083]: 2025-11-29 05:27:47.544547997 +0000 UTC m=+0.071400317 container create 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:27:47 np0005539482 systemd[1]: Started libpod-conmon-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope.
Nov 29 00:27:47 np0005539482 podman[251083]: 2025-11-29 05:27:47.513565558 +0000 UTC m=+0.040417958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:27:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:27:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:27:47 np0005539482 podman[251083]: 2025-11-29 05:27:47.666846403 +0000 UTC m=+0.193698713 container init 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:27:47 np0005539482 podman[251083]: 2025-11-29 05:27:47.678420463 +0000 UTC m=+0.205272773 container start 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:27:47 np0005539482 podman[251083]: 2025-11-29 05:27:47.682181184 +0000 UTC m=+0.209033484 container attach 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:27:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:48 np0005539482 python3.9[251260]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]: {
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "osd_id": 0,
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "type": "bluestore"
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:    },
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "osd_id": 1,
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "type": "bluestore"
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:    },
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "osd_id": 2,
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:        "type": "bluestore"
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]:    }
Nov 29 00:27:48 np0005539482 optimistic_dijkstra[251102]: }
Nov 29 00:27:48 np0005539482 systemd[1]: libpod-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope: Deactivated successfully.
Nov 29 00:27:48 np0005539482 podman[251083]: 2025-11-29 05:27:48.758987066 +0000 UTC m=+1.285839406 container died 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:27:48 np0005539482 systemd[1]: libpod-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope: Consumed 1.088s CPU time.
Nov 29 00:27:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3-merged.mount: Deactivated successfully.
Nov 29 00:27:48 np0005539482 podman[251083]: 2025-11-29 05:27:48.831250433 +0000 UTC m=+1.358102753 container remove 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:27:48 np0005539482 systemd[1]: libpod-conmon-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope: Deactivated successfully.
Nov 29 00:27:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:27:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:27:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:27:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:27:48 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ae7626a2-fc45-4198-9949-e7f74803f0a6 does not exist
Nov 29 00:27:48 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d75533b1-82c9-465d-8dbb-adb9370ddc7b does not exist
Nov 29 00:27:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:27:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:27:49 np0005539482 podman[251447]: 2025-11-29 05:27:49.141982795 +0000 UTC m=+0.074004830 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 00:27:49 np0005539482 python3.9[251518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:50 np0005539482 python3.9[251639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394068.8717532-1382-59214853178857/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:51 np0005539482 python3.9[251789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:27:51 np0005539482 python3.9[251910]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394070.3757503-1397-195886511480855/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 00:27:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:52 np0005539482 python3.9[252062]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 00:27:53 np0005539482 python3.9[252214]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 00:27:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:27:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3321 writes, 14K keys, 3321 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 3321 writes, 3321 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1288 writes, 5837 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.55 MB, 0.01 MB/s
Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.6      0.15              0.06         7    0.021       0      0       0.0       0.0
  L6      1/0    6.83 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    147.4    121.3      0.34              0.15         6    0.057     24K   3194       0.0       0.0
 Sum      1/0    6.83 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    102.2    116.2      0.49              0.22        13    0.038     24K   3194       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    118.3    119.0      0.29              0.13         8    0.036     17K   2463       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    147.4    121.3      0.34              0.15         6    0.057     24K   3194       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.8      0.15              0.06         6    0.025       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.015, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 308.00 MB usage: 1.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(103,1.36 MB,0.440989%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,148.78 KB,0.0471734%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 00:27:54 np0005539482 python3[252366]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 00:27:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:27:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:27:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:01 np0005539482 podman[252434]: 2025-11-29 05:28:01.255674841 +0000 UTC m=+0.302721629 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 00:28:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:05 np0005539482 podman[252380]: 2025-11-29 05:28:05.019592784 +0000 UTC m=+10.184113521 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 00:28:05 np0005539482 podman[252491]: 2025-11-29 05:28:05.169427476 +0000 UTC m=+0.043768920 container create 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 00:28:05 np0005539482 podman[252491]: 2025-11-29 05:28:05.145524698 +0000 UTC m=+0.019866142 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 00:28:05 np0005539482 python3[252366]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 00:28:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:06 np0005539482 python3.9[252682]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:28:07 np0005539482 python3.9[252836]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 00:28:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:08 np0005539482 python3.9[252988]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 00:28:09 np0005539482 python3[253140]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 00:28:09 np0005539482 podman[253176]: 2025-11-29 05:28:09.574015535 +0000 UTC m=+0.076861439 container create 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 00:28:09 np0005539482 podman[253176]: 2025-11-29 05:28:09.528083115 +0000 UTC m=+0.030929069 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 00:28:09 np0005539482 python3[253140]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 00:28:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:10 np0005539482 podman[253240]: 2025-11-29 05:28:10.057446922 +0000 UTC m=+0.100898880 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 00:28:10 np0005539482 python3.9[253393]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:28:11 np0005539482 python3.9[253547]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:28:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:12 np0005539482 python3.9[253698]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394091.6576428-1489-181068609803635/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:28:12 np0005539482 python3.9[253774]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:28:12 np0005539482 systemd[1]: Reloading.
Nov 29 00:28:12 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:28:12 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:28:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:28:13.739 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:28:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:28:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:28:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:28:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:28:13 np0005539482 python3.9[253885]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 00:28:14 np0005539482 systemd[1]: Reloading.
Nov 29 00:28:14 np0005539482 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:28:14 np0005539482 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 00:28:14 np0005539482 systemd[1]: Starting nova_compute container...
Nov 29 00:28:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:14 np0005539482 podman[253924]: 2025-11-29 05:28:14.627512973 +0000 UTC m=+0.113672529 container init 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 29 00:28:14 np0005539482 podman[253924]: 2025-11-29 05:28:14.639492272 +0000 UTC m=+0.125651778 container start 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125)
Nov 29 00:28:14 np0005539482 podman[253924]: nova_compute
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + sudo -E kolla_set_configs
Nov 29 00:28:14 np0005539482 systemd[1]: Started nova_compute container.
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Validating config file
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying service configuration files
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Deleting /etc/ceph
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Creating directory /etc/ceph
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Writing out command to execute
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:14 np0005539482 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 00:28:14 np0005539482 nova_compute[253939]: ++ cat /run_command
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + CMD=nova-compute
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + ARGS=
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + sudo kolla_copy_cacerts
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + [[ ! -n '' ]]
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + . kolla_extend_start
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 00:28:14 np0005539482 nova_compute[253939]: Running command: 'nova-compute'
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + umask 0022
Nov 29 00:28:14 np0005539482 nova_compute[253939]: + exec nova-compute
Nov 29 00:28:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:15 np0005539482 python3.9[254100]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:28:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:16 np0005539482 python3.9[254251]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:28:16 np0005539482 nova_compute[253939]: 2025-11-29 05:28:16.884 253943 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 00:28:16 np0005539482 nova_compute[253939]: 2025-11-29 05:28:16.885 253943 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 00:28:16 np0005539482 nova_compute[253939]: 2025-11-29 05:28:16.885 253943 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 00:28:16 np0005539482 nova_compute[253939]: 2025-11-29 05:28:16.885 253943 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.022 253943 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.044 253943 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.045 253943 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 00:28:17 np0005539482 python3.9[254405]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 00:28:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.736 253943 INFO nova.virt.driver [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.880 253943 INFO nova.compute.provider_config [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.927 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.928 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.929 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.929 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.930 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.930 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.931 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.931 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.932 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.932 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.933 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.933 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.934 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.934 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.934 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.935 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.935 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.936 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.936 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.936 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.937 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.937 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.938 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.938 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.939 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.939 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.939 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.940 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.940 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.941 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.941 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.941 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.942 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.942 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.942 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.943 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.943 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.943 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.944 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.944 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.944 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.945 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.945 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.945 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.946 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.946 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.947 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.947 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.948 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.948 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.949 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.949 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.949 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.950 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.950 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.950 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.951 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.951 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.952 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.952 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.952 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.953 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.953 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.954 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.954 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.955 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.955 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.955 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.956 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.956 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.956 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.957 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.957 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.958 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.958 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.958 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.959 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.959 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.959 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.960 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.960 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.960 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.961 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.961 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.962 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.962 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.962 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.963 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.963 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.963 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.964 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.964 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.965 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.965 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.965 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.966 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.966 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.967 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.967 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.968 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.968 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.968 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.969 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.969 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.970 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.970 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.971 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.971 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.971 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.972 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.972 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.972 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.973 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.973 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.974 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.974 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.974 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.975 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.975 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.975 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.976 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.976 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.976 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.977 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.977 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.978 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.978 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.978 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.979 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.979 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.979 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.980 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.980 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.992 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.992 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.992 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:17 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.000 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.000 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.000 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.006 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.006 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.006 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 WARNING oslo_config.cfg [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 00:28:18 np0005539482 nova_compute[253939]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 00:28:18 np0005539482 nova_compute[253939]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 00:28:18 np0005539482 nova_compute[253939]: and ``live_migration_inbound_addr`` respectively.
Nov 29 00:28:18 np0005539482 nova_compute[253939]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_secret_uuid        = 93f82912-647c-5e78-b081-707d0a2966d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.087 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.088 253943 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.103 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.104 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.104 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.104 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 00:28:18 np0005539482 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 00:28:18 np0005539482 systemd[1]: Started libvirt QEMU daemon.
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.196 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fefaf0e3940> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.200 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fefaf0e3940> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.201 253943 INFO nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.224 253943 WARNING nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 00:28:18 np0005539482 nova_compute[253939]: 2025-11-29 05:28:18.224 253943 DEBUG nova.virt.libvirt.volume.mount [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 00:28:18 np0005539482 python3.9[254609]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 00:28:18 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:28:18 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.249 253943 INFO nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <host>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <uuid>60584de4-e080-4148-9fd9-37c7db79f006</uuid>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <arch>x86_64</arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model>EPYC-Rome-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <vendor>AMD</vendor>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <microcode version='16777317'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <signature family='23' model='49' stepping='0'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='x2apic'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='tsc-deadline'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='osxsave'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='hypervisor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='tsc_adjust'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='spec-ctrl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='stibp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='arch-capabilities'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='cmp_legacy'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='topoext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='virt-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='lbrv'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='tsc-scale'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='vmcb-clean'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='pause-filter'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='pfthreshold'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='svme-addr-chk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='rdctl-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='mds-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature name='pschange-mc-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <pages unit='KiB' size='4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <pages unit='KiB' size='2048'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <pages unit='KiB' size='1048576'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <power_management>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <suspend_mem/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </power_management>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <iommu support='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <migration_features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <live/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <uri_transports>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <uri_transport>tcp</uri_transport>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <uri_transport>rdma</uri_transport>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </uri_transports>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </migration_features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <topology>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <cells num='1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <cell id='0'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          <memory unit='KiB'>7864320</memory>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          <distances>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <sibling id='0' value='10'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          </distances>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          <cpus num='8'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:          </cpus>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        </cell>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </cells>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </topology>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <cache>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </cache>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <secmodel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model>selinux</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <doi>0</doi>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </secmodel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <secmodel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model>dac</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <doi>0</doi>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </secmodel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </host>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <guest>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <os_type>hvm</os_type>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <arch name='i686'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <wordsize>32</wordsize>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <domain type='qemu'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <domain type='kvm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <pae/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <nonpae/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <acpi default='on' toggle='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <apic default='on' toggle='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <cpuselection/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <deviceboot/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <disksnapshot default='on' toggle='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <externalSnapshot/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </guest>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <guest>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <os_type>hvm</os_type>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <arch name='x86_64'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <wordsize>64</wordsize>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <domain type='qemu'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <domain type='kvm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <acpi default='on' toggle='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <apic default='on' toggle='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <cpuselection/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <deviceboot/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <disksnapshot default='on' toggle='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <externalSnapshot/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </guest>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 
Nov 29 00:28:19 np0005539482 nova_compute[253939]: </capabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.260 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.296 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 00:28:19 np0005539482 nova_compute[253939]: <domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <domain>kvm</domain>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <arch>i686</arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <vcpu max='4096'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <iothreads supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <os supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='firmware'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <loader supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>rom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pflash</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='readonly'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>yes</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='secure'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </loader>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </os>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='maximumMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <vendor>AMD</vendor>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='succor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='custom' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-128'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-256'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-512'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <memoryBacking supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='sourceType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>anonymous</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>memfd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </memoryBacking>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <disk supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='diskDevice'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>disk</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cdrom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>floppy</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>lun</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>fdc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>sata</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </disk>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <graphics supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vnc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egl-headless</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </graphics>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <video supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='modelType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vga</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cirrus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>none</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>bochs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ramfb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </video>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hostdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='mode'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>subsystem</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='startupPolicy'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>mandatory</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>requisite</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>optional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='subsysType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pci</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='capsType'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='pciBackend'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hostdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <rng supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>random</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </rng>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <filesystem supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='driverType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>path</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>handle</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtiofs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </filesystem>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <tpm supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-tis</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-crb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emulator</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>external</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendVersion'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>2.0</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </tpm>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <redirdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </redirdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <channel supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </channel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <crypto supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </crypto>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <interface supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>passt</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </interface>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <panic supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>isa</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>hyperv</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </panic>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <console supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>null</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dev</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pipe</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stdio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>udp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tcp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu-vdagent</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </console>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <gic supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <genid supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backup supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <async-teardown supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <ps2 supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sev supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sgx supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hyperv supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='features'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>relaxed</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vapic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>spinlocks</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vpindex</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>runtime</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>synic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stimer</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reset</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vendor_id</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>frequencies</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reenlightenment</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tlbflush</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ipi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>avic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emsr_bitmap</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>xmm_input</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hyperv>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <launchSecurity supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='sectype'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tdx</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </launchSecurity>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: </domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.303 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 00:28:19 np0005539482 nova_compute[253939]: <domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <domain>kvm</domain>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <arch>i686</arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <vcpu max='240'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <iothreads supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <os supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='firmware'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <loader supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>rom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pflash</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='readonly'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>yes</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='secure'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </loader>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </os>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 podman[254691]: 2025-11-29 05:28:19.362003528 +0000 UTC m=+0.087129637 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='maximumMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <vendor>AMD</vendor>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='succor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='custom' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-128'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-256'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-512'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <memoryBacking supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='sourceType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>anonymous</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>memfd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </memoryBacking>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <disk supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='diskDevice'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>disk</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cdrom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>floppy</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>lun</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ide</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>fdc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>sata</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </disk>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <graphics supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vnc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egl-headless</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </graphics>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <video supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='modelType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vga</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cirrus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>none</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>bochs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ramfb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </video>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hostdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='mode'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>subsystem</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='startupPolicy'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>mandatory</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>requisite</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>optional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='subsysType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pci</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='capsType'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='pciBackend'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hostdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <rng supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>random</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </rng>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <filesystem supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='driverType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>path</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>handle</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtiofs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </filesystem>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <tpm supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-tis</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-crb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emulator</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>external</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendVersion'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>2.0</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </tpm>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <redirdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </redirdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <channel supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </channel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <crypto supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </crypto>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <interface supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>passt</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </interface>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <panic supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>isa</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>hyperv</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </panic>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <console supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>null</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dev</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pipe</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stdio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>udp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tcp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu-vdagent</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </console>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <gic supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <genid supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backup supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <async-teardown supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <ps2 supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sev supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sgx supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hyperv supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='features'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>relaxed</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vapic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>spinlocks</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vpindex</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>runtime</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>synic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stimer</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reset</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vendor_id</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>frequencies</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reenlightenment</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tlbflush</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ipi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>avic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emsr_bitmap</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>xmm_input</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hyperv>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <launchSecurity supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='sectype'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tdx</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </launchSecurity>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: </domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.349 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.356 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 00:28:19 np0005539482 nova_compute[253939]: <domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <domain>kvm</domain>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <arch>x86_64</arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <vcpu max='4096'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <iothreads supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <os supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='firmware'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>efi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <loader supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>rom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pflash</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='readonly'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>yes</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='secure'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>yes</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </loader>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </os>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='maximumMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <vendor>AMD</vendor>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='succor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='custom' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-128'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-256'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-512'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <memoryBacking supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='sourceType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>anonymous</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>memfd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </memoryBacking>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <disk supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='diskDevice'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>disk</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cdrom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>floppy</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>lun</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>fdc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>sata</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </disk>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <graphics supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vnc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egl-headless</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </graphics>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <video supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='modelType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vga</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cirrus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>none</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>bochs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ramfb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </video>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hostdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='mode'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>subsystem</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='startupPolicy'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>mandatory</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>requisite</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>optional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='subsysType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pci</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='capsType'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='pciBackend'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hostdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <rng supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>random</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </rng>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <filesystem supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='driverType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>path</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>handle</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtiofs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </filesystem>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <tpm supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-tis</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-crb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emulator</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>external</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendVersion'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>2.0</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </tpm>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <redirdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </redirdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <channel supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </channel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <crypto supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </crypto>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <interface supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>passt</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </interface>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <panic supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>isa</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>hyperv</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </panic>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <console supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>null</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dev</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pipe</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stdio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>udp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tcp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu-vdagent</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </console>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <gic supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <genid supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backup supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <async-teardown supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <ps2 supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sev supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sgx supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hyperv supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='features'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>relaxed</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vapic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>spinlocks</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vpindex</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>runtime</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>synic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stimer</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reset</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vendor_id</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>frequencies</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reenlightenment</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tlbflush</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ipi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>avic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emsr_bitmap</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>xmm_input</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hyperv>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <launchSecurity supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='sectype'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tdx</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </launchSecurity>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: </domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.414 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 00:28:19 np0005539482 nova_compute[253939]: <domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <domain>kvm</domain>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <arch>x86_64</arch>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <vcpu max='240'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <iothreads supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <os supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='firmware'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <loader supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>rom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pflash</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='readonly'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>yes</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='secure'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>no</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </loader>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </os>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='maximumMigratable'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>on</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>off</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <vendor>AMD</vendor>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='succor'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <mode name='custom' supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Denverton-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='auto-ibrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amd-psfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='stibp-always-on'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='EPYC-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-128'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-256'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx10-512'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='prefetchiti'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Haswell-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512er'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512pf'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fma4'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tbm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xop'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='amx-tile'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-bf16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-fp16'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bitalg'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrc'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fzrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='la57'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='taa-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xfd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ifma'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cmpccxadd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fbsdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='fsrs'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ibrs-all'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mcdt-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pbrsb-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='psdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='serialize'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vaes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='hle'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='rtm'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512bw'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512cd'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512dq'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512f'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='avx512vl'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='invpcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pcid'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='pku'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='mpx'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='core-capability'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='split-lock-detect'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='cldemote'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='erms'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='gfni'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdir64b'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='movdiri'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='xsaves'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='athlon-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='core2duo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='coreduo-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='n270-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='ss'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <blockers model='phenom-v1'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnow'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <feature name='3dnowext'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </blockers>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </mode>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </cpu>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <memoryBacking supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <enum name='sourceType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>anonymous</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <value>memfd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </memoryBacking>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <disk supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='diskDevice'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>disk</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cdrom</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>floppy</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>lun</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ide</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>fdc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>sata</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </disk>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <graphics supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vnc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egl-headless</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </graphics>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <video supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='modelType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vga</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>cirrus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>none</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>bochs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ramfb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </video>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hostdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='mode'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>subsystem</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='startupPolicy'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>mandatory</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>requisite</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>optional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='subsysType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pci</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>scsi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='capsType'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='pciBackend'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hostdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <rng supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtio-non-transitional</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>random</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>egd</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </rng>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <filesystem supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='driverType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>path</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>handle</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>virtiofs</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </filesystem>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <tpm supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-tis</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tpm-crb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emulator</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>external</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendVersion'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>2.0</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </tpm>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <redirdev supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='bus'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>usb</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </redirdev>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <channel supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </channel>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <crypto supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendModel'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>builtin</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </crypto>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <interface supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='backendType'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>default</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>passt</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </interface>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <panic supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='model'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>isa</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>hyperv</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </panic>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <console supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='type'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>null</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vc</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pty</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dev</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>file</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>pipe</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stdio</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>udp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tcp</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>unix</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>qemu-vdagent</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>dbus</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </console>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </devices>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  <features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <gic supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <genid supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <backup supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <async-teardown supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <ps2 supported='yes'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sev supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <sgx supported='no'/>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <hyperv supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='features'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>relaxed</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vapic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>spinlocks</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vpindex</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>runtime</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>synic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>stimer</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reset</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>vendor_id</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>frequencies</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>reenlightenment</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tlbflush</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>ipi</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>avic</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>emsr_bitmap</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>xmm_input</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </defaults>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </hyperv>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    <launchSecurity supported='yes'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      <enum name='sectype'>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:        <value>tdx</value>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:      </enum>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:    </launchSecurity>
Nov 29 00:28:19 np0005539482 nova_compute[253939]:  </features>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: </domainCapabilities>
Nov 29 00:28:19 np0005539482 nova_compute[253939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.473 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.474 253943 INFO nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Secure Boot support detected
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.476 253943 INFO nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.477 253943 INFO nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.493 253943 DEBUG nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.555 253943 INFO nova.virt.node [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Determined node identity 59594bc8-0143-475b-913f-cbe106b48966 from /var/lib/nova/compute_id
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.589 253943 WARNING nova.compute.manager [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Compute nodes ['59594bc8-0143-475b-913f-cbe106b48966'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.632 253943 INFO nova.compute.manager [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 WARNING nova.compute.manager [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 DEBUG oslo_concurrency.lockutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 DEBUG oslo_concurrency.lockutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 DEBUG oslo_concurrency.lockutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.670 253943 DEBUG nova.compute.resource_tracker [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 00:28:19 np0005539482 nova_compute[253939]: 2025-11-29 05:28:19.670 253943 DEBUG oslo_concurrency.processutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 00:28:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:19 np0005539482 python3.9[254814]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:28:20 np0005539482 systemd[1]: Stopping nova_compute container...
Nov 29 00:28:20 np0005539482 nova_compute[253939]: 2025-11-29 05:28:20.103 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 00:28:20 np0005539482 nova_compute[253939]: 2025-11-29 05:28:20.103 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 00:28:20 np0005539482 nova_compute[253939]: 2025-11-29 05:28:20.103 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 00:28:20 np0005539482 virtqemud[254503]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 00:28:20 np0005539482 virtqemud[254503]: hostname: compute-0
Nov 29 00:28:20 np0005539482 virtqemud[254503]: End of file while reading data: Input/output error
Nov 29 00:28:20 np0005539482 systemd[1]: libpod-6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016.scope: Deactivated successfully.
Nov 29 00:28:20 np0005539482 systemd[1]: libpod-6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016.scope: Consumed 3.589s CPU time.
Nov 29 00:28:20 np0005539482 podman[254838]: 2025-11-29 05:28:20.478535641 +0000 UTC m=+0.440564732 container died 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:28:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016-userdata-shm.mount: Deactivated successfully.
Nov 29 00:28:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay-dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2-merged.mount: Deactivated successfully.
Nov 29 00:28:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:22 np0005539482 podman[254838]: 2025-11-29 05:28:22.408832305 +0000 UTC m=+2.370861396 container cleanup 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:28:22 np0005539482 podman[254838]: nova_compute
Nov 29 00:28:22 np0005539482 podman[254868]: nova_compute
Nov 29 00:28:22 np0005539482 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 00:28:22 np0005539482 systemd[1]: Stopped nova_compute container.
Nov 29 00:28:22 np0005539482 systemd[1]: Starting nova_compute container...
Nov 29 00:28:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:22 np0005539482 podman[254882]: 2025-11-29 05:28:22.67914701 +0000 UTC m=+0.133575850 container init 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:28:22 np0005539482 podman[254882]: 2025-11-29 05:28:22.689215834 +0000 UTC m=+0.143644634 container start 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute)
Nov 29 00:28:22 np0005539482 podman[254882]: nova_compute
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + sudo -E kolla_set_configs
Nov 29 00:28:22 np0005539482 systemd[1]: Started nova_compute container.
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Validating config file
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying service configuration files
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /etc/ceph
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Creating directory /etc/ceph
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Writing out command to execute
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:22 np0005539482 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 00:28:22 np0005539482 nova_compute[254898]: ++ cat /run_command
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + CMD=nova-compute
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + ARGS=
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + sudo kolla_copy_cacerts
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + [[ ! -n '' ]]
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + . kolla_extend_start
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 00:28:22 np0005539482 nova_compute[254898]: Running command: 'nova-compute'
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + umask 0022
Nov 29 00:28:22 np0005539482 nova_compute[254898]: + exec nova-compute
Nov 29 00:28:23 np0005539482 python3.9[255061]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 00:28:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:23 np0005539482 systemd[1]: Started libpod-conmon-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd.scope.
Nov 29 00:28:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:23 np0005539482 podman[255087]: 2025-11-29 05:28:23.998005453 +0000 UTC m=+0.153215485 container init 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Nov 29 00:28:24 np0005539482 podman[255087]: 2025-11-29 05:28:24.006772776 +0000 UTC m=+0.161982768 container start 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init)
Nov 29 00:28:24 np0005539482 python3.9[255061]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 00:28:24 np0005539482 nova_compute_init[255109]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 00:28:24 np0005539482 systemd[1]: libpod-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd.scope: Deactivated successfully.
Nov 29 00:28:24 np0005539482 podman[255124]: 2025-11-29 05:28:24.140530849 +0000 UTC m=+0.030783015 container died 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init)
Nov 29 00:28:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd-userdata-shm.mount: Deactivated successfully.
Nov 29 00:28:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270-merged.mount: Deactivated successfully.
Nov 29 00:28:24 np0005539482 podman[255124]: 2025-11-29 05:28:24.179950322 +0000 UTC m=+0.070202458 container cleanup 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 00:28:24 np0005539482 systemd[1]: libpod-conmon-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd.scope: Deactivated successfully.
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.788 254902 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.789 254902 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.789 254902 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.789 254902 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 00:28:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:24 np0005539482 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 00:28:24 np0005539482 systemd[1]: session-49.scope: Consumed 2min 36.249s CPU time.
Nov 29 00:28:24 np0005539482 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Nov 29 00:28:24 np0005539482 systemd-logind[793]: Removed session 49.
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.918 254902 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.944 254902 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:28:24 np0005539482 nova_compute[254898]: 2025-11-29 05:28:24.944 254902 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.415 254902 INFO nova.virt.driver [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.539 254902 INFO nova.compute.provider_config [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.555 254902 DEBUG oslo_concurrency.lockutils [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.555 254902 DEBUG oslo_concurrency.lockutils [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.555 254902 DEBUG oslo_concurrency.lockutils [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 WARNING oslo_config.cfg [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 00:28:25 np0005539482 nova_compute[254898]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 00:28:25 np0005539482 nova_compute[254898]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 00:28:25 np0005539482 nova_compute[254898]: and ``live_migration_inbound_addr`` respectively.
Nov 29 00:28:25 np0005539482 nova_compute[254898]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_secret_uuid        = 93f82912-647c-5e78-b081-707d0a2966d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.694 254902 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.721 254902 INFO nova.virt.node [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Determined node identity 59594bc8-0143-475b-913f-cbe106b48966 from /var/lib/nova/compute_id#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.721 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.722 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.723 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.723 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 00:28:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.735 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7feb889764c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.738 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7feb889764c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.740 254902 INFO nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.746 254902 INFO nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <host>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <uuid>60584de4-e080-4148-9fd9-37c7db79f006</uuid>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <arch>x86_64</arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model>EPYC-Rome-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <vendor>AMD</vendor>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <microcode version='16777317'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <signature family='23' model='49' stepping='0'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='x2apic'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='tsc-deadline'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='osxsave'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='hypervisor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='tsc_adjust'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='spec-ctrl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='stibp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='arch-capabilities'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='cmp_legacy'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='topoext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='virt-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='lbrv'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='tsc-scale'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='vmcb-clean'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='pause-filter'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='pfthreshold'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='svme-addr-chk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='rdctl-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='mds-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature name='pschange-mc-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <pages unit='KiB' size='4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <pages unit='KiB' size='2048'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <pages unit='KiB' size='1048576'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <power_management>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <suspend_mem/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </power_management>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <iommu support='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <migration_features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <live/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <uri_transports>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <uri_transport>tcp</uri_transport>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <uri_transport>rdma</uri_transport>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </uri_transports>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </migration_features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <topology>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <cells num='1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <cell id='0'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          <memory unit='KiB'>7864320</memory>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          <distances>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <sibling id='0' value='10'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          </distances>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          <cpus num='8'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:          </cpus>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        </cell>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </cells>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </topology>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <cache>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </cache>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <secmodel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model>selinux</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <doi>0</doi>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </secmodel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <secmodel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model>dac</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <doi>0</doi>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </secmodel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </host>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <guest>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <os_type>hvm</os_type>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <arch name='i686'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <wordsize>32</wordsize>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <domain type='qemu'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <domain type='kvm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <pae/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <nonpae/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <acpi default='on' toggle='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <apic default='on' toggle='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <cpuselection/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <deviceboot/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <disksnapshot default='on' toggle='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <externalSnapshot/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </guest>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <guest>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <os_type>hvm</os_type>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <arch name='x86_64'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <wordsize>64</wordsize>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <domain type='qemu'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <domain type='kvm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <acpi default='on' toggle='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <apic default='on' toggle='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <cpuselection/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <deviceboot/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <disksnapshot default='on' toggle='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <externalSnapshot/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </guest>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 
Nov 29 00:28:25 np0005539482 nova_compute[254898]: </capabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: #033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.752 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.754 254902 DEBUG nova.virt.libvirt.volume.mount [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.757 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 00:28:25 np0005539482 nova_compute[254898]: <domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <domain>kvm</domain>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <arch>i686</arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <vcpu max='240'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <iothreads supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <os supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='firmware'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <loader supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>rom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pflash</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='readonly'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>yes</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='secure'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </loader>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </os>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='maximumMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <vendor>AMD</vendor>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='succor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='custom' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-128'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-256'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-512'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SierraForest'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='athlon'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='athlon-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='core2duo'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='core2duo-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='coreduo'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='coreduo-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='n270'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='n270-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='phenom'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='phenom-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <memoryBacking supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='sourceType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>file</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>anonymous</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>memfd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </memoryBacking>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <devices>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <disk supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='diskDevice'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>disk</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>cdrom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>floppy</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>lun</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ide</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>fdc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>sata</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </disk>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <graphics supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vnc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>egl-headless</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </graphics>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <video supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='modelType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vga</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>cirrus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>none</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>bochs</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ramfb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </video>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <hostdev supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='mode'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>subsystem</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='startupPolicy'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>mandatory</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>requisite</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>optional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='subsysType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pci</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='capsType'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='pciBackend'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </hostdev>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <rng supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>random</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>egd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </rng>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <filesystem supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='driverType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>path</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>handle</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtiofs</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </filesystem>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <tpm supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tpm-tis</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tpm-crb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>emulator</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>external</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendVersion'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>2.0</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </tpm>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <redirdev supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </redirdev>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <channel supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </channel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <crypto supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>qemu</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </crypto>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <interface supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>passt</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </interface>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <panic supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>isa</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>hyperv</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </panic>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <console supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>null</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dev</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>file</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pipe</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>stdio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>udp</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tcp</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>qemu-vdagent</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </console>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </devices>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <gic supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <genid supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <backup supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <async-teardown supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <ps2 supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <sev supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <sgx supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <hyperv supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='features'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>relaxed</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vapic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>spinlocks</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vpindex</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>runtime</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>synic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>stimer</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>reset</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vendor_id</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>frequencies</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>reenlightenment</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tlbflush</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ipi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>avic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>emsr_bitmap</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>xmm_input</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <defaults>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </defaults>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </hyperv>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <launchSecurity supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='sectype'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tdx</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </launchSecurity>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: </domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.765 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 00:28:25 np0005539482 nova_compute[254898]: <domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <domain>kvm</domain>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <arch>i686</arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <vcpu max='4096'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <iothreads supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <os supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='firmware'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <loader supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>rom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pflash</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='readonly'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>yes</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='secure'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </loader>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </os>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='maximumMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <vendor>AMD</vendor>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='succor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='custom' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-128'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-256'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-512'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SierraForest'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='athlon'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='athlon-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='core2duo'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='core2duo-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='coreduo'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='coreduo-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='n270'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='n270-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='phenom'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='phenom-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <memoryBacking supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='sourceType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>file</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>anonymous</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>memfd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </memoryBacking>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <devices>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <disk supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='diskDevice'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>disk</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>cdrom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>floppy</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>lun</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>fdc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>sata</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </disk>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <graphics supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vnc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>egl-headless</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </graphics>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <video supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='modelType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vga</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>cirrus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>none</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>bochs</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ramfb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </video>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <hostdev supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='mode'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>subsystem</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='startupPolicy'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>mandatory</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>requisite</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>optional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='subsysType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pci</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='capsType'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='pciBackend'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </hostdev>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <rng supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>random</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>egd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </rng>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <filesystem supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='driverType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>path</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>handle</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtiofs</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </filesystem>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <tpm supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tpm-tis</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tpm-crb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>emulator</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>external</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendVersion'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>2.0</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </tpm>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <redirdev supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </redirdev>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <channel supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </channel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <crypto supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>qemu</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </crypto>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <interface supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>passt</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </interface>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <panic supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>isa</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>hyperv</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </panic>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <console supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>null</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dev</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>file</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pipe</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>stdio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>udp</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tcp</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>qemu-vdagent</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </console>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </devices>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <gic supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <genid supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <backup supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <async-teardown supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <ps2 supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <sev supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <sgx supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <hyperv supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='features'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>relaxed</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vapic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>spinlocks</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vpindex</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>runtime</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>synic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>stimer</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>reset</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vendor_id</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>frequencies</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>reenlightenment</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tlbflush</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ipi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>avic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>emsr_bitmap</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>xmm_input</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <defaults>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </defaults>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </hyperv>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <launchSecurity supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='sectype'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tdx</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </launchSecurity>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: </domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.803 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.808 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 00:28:25 np0005539482 nova_compute[254898]: <domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <domain>kvm</domain>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <arch>x86_64</arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <vcpu max='240'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <iothreads supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <os supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='firmware'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <loader supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>rom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pflash</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='readonly'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>yes</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='secure'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </loader>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </os>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='maximumMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <vendor>AMD</vendor>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='succor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='custom' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-128'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-256'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-512'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SierraForest'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='athlon'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='athlon-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='core2duo'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='core2duo-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='coreduo'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='coreduo-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='n270'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='n270-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='phenom'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='phenom-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <memoryBacking supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='sourceType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>file</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>anonymous</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>memfd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </memoryBacking>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <devices>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <disk supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='diskDevice'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>disk</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>cdrom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>floppy</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>lun</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ide</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>fdc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>sata</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </disk>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <graphics supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vnc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>egl-headless</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </graphics>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <video supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='modelType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vga</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>cirrus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>none</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>bochs</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ramfb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </video>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <hostdev supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='mode'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>subsystem</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='startupPolicy'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>mandatory</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>requisite</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>optional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='subsysType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pci</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='capsType'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='pciBackend'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </hostdev>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <rng supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>random</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>egd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </rng>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <filesystem supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='driverType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>path</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>handle</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>virtiofs</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </filesystem>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <tpm supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tpm-tis</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tpm-crb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>emulator</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>external</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendVersion'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>2.0</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </tpm>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <redirdev supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </redirdev>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <channel supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </channel>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <crypto supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>qemu</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </crypto>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <interface supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='backendType'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>passt</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </interface>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <panic supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>isa</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>hyperv</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </panic>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <console supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>null</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vc</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dev</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>file</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pipe</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>stdio</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>udp</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tcp</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>qemu-vdagent</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </console>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </devices>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <gic supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <genid supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <backup supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <async-teardown supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <ps2 supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <sev supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <sgx supported='no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <hyperv supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='features'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>relaxed</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vapic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>spinlocks</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vpindex</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>runtime</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>synic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>stimer</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>reset</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>vendor_id</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>frequencies</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>reenlightenment</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tlbflush</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>ipi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>avic</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>emsr_bitmap</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>xmm_input</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <defaults>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </defaults>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </hyperv>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <launchSecurity supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='sectype'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>tdx</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </launchSecurity>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </features>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: </domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 00:28:25 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.871 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 00:28:25 np0005539482 nova_compute[254898]: <domainCapabilities>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <domain>kvm</domain>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <arch>x86_64</arch>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <vcpu max='4096'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <iothreads supported='yes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <os supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <enum name='firmware'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>efi</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <loader supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>rom</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>pflash</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='readonly'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>yes</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='secure'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>yes</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>no</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </loader>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  </os>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:  <cpu>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-passthrough' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='hostPassthroughMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='maximum' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <enum name='maximumMigratable'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>on</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <value>off</value>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='host-model' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <vendor>AMD</vendor>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='x2apic'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='hypervisor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='stibp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='overflow-recov'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='succor'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lbrv'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='tsc-scale'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='flushbyasid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pause-filter'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='pfthreshold'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <feature policy='disable' name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:    <mode name='custom' supported='yes'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Broadwell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Cooperlake-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Denverton-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Dhyana-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='auto-ibrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Milan-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amd-psfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='no-nested-data-bp'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='null-sel-clr-base'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='stibp-always-on'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-Rome-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='EPYC-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='GraniteRapids-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-128'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-256'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx10-512'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='prefetchiti'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Haswell-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v3'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v6'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Icelake-Server-v7'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-IBRS'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='IvyBridge-v2'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='KnightsMill-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4fmaps'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-4vnniw'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512er'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='avx512pf'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G4-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:      <blockers model='Opteron_G5-v1'>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='fma4'/>
Nov 29 00:28:25 np0005539482 nova_compute[254898]:        <feature name='tbm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xop'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v2'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='SapphireRapids-v3'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-int8'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='amx-tile'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-bf16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-fp16'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512-vpopcntdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bitalg'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512ifma'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vbmi2'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrc'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fzrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='la57'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='taa-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='tsx-ldtrk'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xfd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='SierraForest'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='SierraForest-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-ifma'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-ne-convert'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx-vnni-int8'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='bus-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cmpccxadd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fbsdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='fsrs'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ibrs-all'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='mcdt-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pbrsb-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='psdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='sbdr-ssdp-no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='serialize'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vaes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='vpclmulqdq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v2'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v3'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Client-v4'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v2'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='hle'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='rtm'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v3'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v4'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Skylake-Server-v5'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512bw'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512cd'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512dq'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512f'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='avx512vl'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='invpcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pcid'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='pku'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Snowridge'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='mpx'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v2'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v3'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='core-capability'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='split-lock-detect'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='Snowridge-v4'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='cldemote'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='erms'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='gfni'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdir64b'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='movdiri'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='xsaves'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='athlon'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='athlon-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='core2duo'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='core2duo-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='coreduo'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='coreduo-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='n270'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='n270-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='ss'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='phenom'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <blockers model='phenom-v1'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnow'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <feature name='3dnowext'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </blockers>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </mode>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  </cpu>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  <memoryBacking supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <enum name='sourceType'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <value>file</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <value>anonymous</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <value>memfd</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  </memoryBacking>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  <devices>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <disk supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='diskDevice'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>disk</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>cdrom</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>floppy</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>lun</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>fdc</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>sata</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </disk>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <graphics supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>vnc</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>egl-headless</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </graphics>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <video supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='modelType'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>vga</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>cirrus</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>none</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>bochs</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>ramfb</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </video>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <hostdev supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='mode'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>subsystem</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='startupPolicy'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>mandatory</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>requisite</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>optional</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='subsysType'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>pci</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>scsi</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='capsType'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='pciBackend'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </hostdev>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <rng supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio-transitional</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtio-non-transitional</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>random</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>egd</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </rng>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <filesystem supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='driverType'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>path</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>handle</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>virtiofs</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </filesystem>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <tpm supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>tpm-tis</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>tpm-crb</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>emulator</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>external</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='backendVersion'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>2.0</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </tpm>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <redirdev supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='bus'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>usb</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </redirdev>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <channel supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </channel>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <crypto supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='model'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>qemu</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='backendModel'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>builtin</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </crypto>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <interface supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='backendType'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>default</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>passt</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </interface>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <panic supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='model'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>isa</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>hyperv</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </panic>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <console supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='type'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>null</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>vc</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>pty</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>dev</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>file</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>pipe</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>stdio</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>udp</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>tcp</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>unix</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>qemu-vdagent</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>dbus</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </console>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  </devices>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  <features>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <gic supported='no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <vmcoreinfo supported='yes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <genid supported='yes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <backingStoreInput supported='yes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <backup supported='yes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <async-teardown supported='yes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <ps2 supported='yes'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <sev supported='no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <sgx supported='no'/>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <hyperv supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='features'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>relaxed</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>vapic</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>spinlocks</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>vpindex</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>runtime</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>synic</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>stimer</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>reset</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>vendor_id</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>frequencies</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>reenlightenment</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>tlbflush</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>ipi</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>avic</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>emsr_bitmap</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>xmm_input</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <defaults>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <spinlocks>4095</spinlocks>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <stimer_direct>on</stimer_direct>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </defaults>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </hyperv>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    <launchSecurity supported='yes'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      <enum name='sectype'>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:        <value>tdx</value>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:      </enum>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:    </launchSecurity>
Nov 29 00:28:26 np0005539482 nova_compute[254898]:  </features>
Nov 29 00:28:26 np0005539482 nova_compute[254898]: </domainCapabilities>
Nov 29 00:28:26 np0005539482 nova_compute[254898]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.939 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.939 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.939 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.939 254902 INFO nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Secure Boot support detected#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.941 254902 INFO nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.942 254902 INFO nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.951 254902 DEBUG nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:25.979 254902 INFO nova.virt.node [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Determined node identity 59594bc8-0143-475b-913f-cbe106b48966 from /var/lib/nova/compute_id#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.002 254902 WARNING nova.compute.manager [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Compute nodes ['59594bc8-0143-475b-913f-cbe106b48966'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.038 254902 INFO nova.compute.manager [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.067 254902 WARNING nova.compute.manager [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.068 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.068 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.068 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.069 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.069 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:28:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:28:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129658402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.480 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:28:26 np0005539482 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 00:28:26 np0005539482 systemd[1]: Started libvirt nodedev daemon.
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.869 254902 WARNING nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.871 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.871 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.871 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.903 254902 WARNING nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] No compute node record for compute-0.ctlplane.example.com:59594bc8-0143-475b-913f-cbe106b48966: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 59594bc8-0143-475b-913f-cbe106b48966 could not be found.#033[00m
Nov 29 00:28:26 np0005539482 nova_compute[254898]: 2025-11-29 05:28:26.928 254902 INFO nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 59594bc8-0143-475b-913f-cbe106b48966#033[00m
Nov 29 00:28:27 np0005539482 nova_compute[254898]: 2025-11-29 05:28:27.002 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:28:27 np0005539482 nova_compute[254898]: 2025-11-29 05:28:27.003 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:28:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:27 np0005539482 nova_compute[254898]: 2025-11-29 05:28:27.983 254902 INFO nova.scheduler.client.report [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] [req-7da06b63-3af5-41bd-b235-19aadffc157d] Created resource provider record via placement API for resource provider with UUID 59594bc8-0143-475b-913f-cbe106b48966 and name compute-0.ctlplane.example.com.#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.397 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:28:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:28:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2587030157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.834 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.838 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 00:28:28 np0005539482 nova_compute[254898]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.838 254902 INFO nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.839 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.839 254902 DEBUG nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.942 254902 DEBUG nova.scheduler.client.report [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updated inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.943 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating resource provider 59594bc8-0143-475b-913f-cbe106b48966 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 00:28:28 np0005539482 nova_compute[254898]: 2025-11-29 05:28:28.943 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:28:29 np0005539482 nova_compute[254898]: 2025-11-29 05:28:29.094 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating resource provider 59594bc8-0143-475b-913f-cbe106b48966 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 00:28:29 np0005539482 nova_compute[254898]: 2025-11-29 05:28:29.132 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:28:29 np0005539482 nova_compute[254898]: 2025-11-29 05:28:29.133 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:28:29 np0005539482 nova_compute[254898]: 2025-11-29 05:28:29.133 254902 DEBUG nova.service [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 00:28:29 np0005539482 nova_compute[254898]: 2025-11-29 05:28:29.249 254902 DEBUG nova.service [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 00:28:29 np0005539482 nova_compute[254898]: 2025-11-29 05:28:29.250 254902 DEBUG nova.servicegroup.drivers.db [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 00:28:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:35 np0005539482 nova_compute[254898]: 2025-11-29 05:28:35.251 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:28:35 np0005539482 nova_compute[254898]: 2025-11-29 05:28:35.278 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:28:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:36 np0005539482 podman[255269]: 2025-11-29 05:28:36.069596073 +0000 UTC m=+0.114539000 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 00:28:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:41 np0005539482 podman[255291]: 2025-11-29 05:28:41.027609732 +0000 UTC m=+0.077159436 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:28:41
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:28:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:49 np0005539482 podman[255417]: 2025-11-29 05:28:49.465204838 +0000 UTC m=+0.064405108 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 00:28:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:28:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 09b80710-6c2b-43d5-a118-a3a9df355a4a does not exist
Nov 29 00:28:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 2407b23f-abc0-4daf-a606-d322671ac326 does not exist
Nov 29 00:28:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev cb4e024d-82dd-47d5-bcc3-6b5fe1660fb8 does not exist
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:28:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.540158215 +0000 UTC m=+0.063602860 container create 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:28:50 np0005539482 systemd[1]: Started libpod-conmon-959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696.scope.
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.507198508 +0000 UTC m=+0.030643213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:28:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.633466096 +0000 UTC m=+0.156910731 container init 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.643214182 +0000 UTC m=+0.166658787 container start 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.646435917 +0000 UTC m=+0.169880632 container attach 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:28:50 np0005539482 amazing_galois[255627]: 167 167
Nov 29 00:28:50 np0005539482 systemd[1]: libpod-959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696.scope: Deactivated successfully.
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.651116846 +0000 UTC m=+0.174561501 container died 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:28:50 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4582dbd73a6820c9abe75fdd30115dded2835d37d959a629f85c2e6399618c78-merged.mount: Deactivated successfully.
Nov 29 00:28:50 np0005539482 podman[255610]: 2025-11-29 05:28:50.689816076 +0000 UTC m=+0.213260691 container remove 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:28:50 np0005539482 systemd[1]: libpod-conmon-959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696.scope: Deactivated successfully.
Nov 29 00:28:50 np0005539482 podman[255649]: 2025-11-29 05:28:50.885004056 +0000 UTC m=+0.053342262 container create 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:28:50 np0005539482 systemd[1]: Started libpod-conmon-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope.
Nov 29 00:28:50 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:28:50 np0005539482 podman[255649]: 2025-11-29 05:28:50.863612259 +0000 UTC m=+0.031950485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:28:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:50 np0005539482 podman[255649]: 2025-11-29 05:28:50.988747029 +0000 UTC m=+0.157085305 container init 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:28:51 np0005539482 podman[255649]: 2025-11-29 05:28:51.000587475 +0000 UTC m=+0.168925691 container start 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:28:51 np0005539482 podman[255649]: 2025-11-29 05:28:51.003902212 +0000 UTC m=+0.172240528 container attach 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146402093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146402093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/750604550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/750604550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:28:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478911818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:28:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478911818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:28:52 np0005539482 pedantic_noether[255665]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:28:52 np0005539482 pedantic_noether[255665]: --> relative data size: 1.0
Nov 29 00:28:52 np0005539482 pedantic_noether[255665]: --> All data devices are unavailable
Nov 29 00:28:52 np0005539482 systemd[1]: libpod-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope: Deactivated successfully.
Nov 29 00:28:52 np0005539482 podman[255649]: 2025-11-29 05:28:52.173898066 +0000 UTC m=+1.342236272 container died 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:28:52 np0005539482 systemd[1]: libpod-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope: Consumed 1.074s CPU time.
Nov 29 00:28:52 np0005539482 systemd[1]: var-lib-containers-storage-overlay-02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b-merged.mount: Deactivated successfully.
Nov 29 00:28:52 np0005539482 podman[255649]: 2025-11-29 05:28:52.227386081 +0000 UTC m=+1.395724287 container remove 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:28:52 np0005539482 systemd[1]: libpod-conmon-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope: Deactivated successfully.
Nov 29 00:28:52 np0005539482 podman[255845]: 2025-11-29 05:28:52.902640107 +0000 UTC m=+0.062620107 container create ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:28:52 np0005539482 systemd[1]: Started libpod-conmon-ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3.scope.
Nov 29 00:28:52 np0005539482 podman[255845]: 2025-11-29 05:28:52.873785107 +0000 UTC m=+0.033765197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:28:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:53 np0005539482 podman[255845]: 2025-11-29 05:28:53.001006676 +0000 UTC m=+0.160986726 container init ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:28:53 np0005539482 podman[255845]: 2025-11-29 05:28:53.013331372 +0000 UTC m=+0.173311382 container start ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:28:53 np0005539482 podman[255845]: 2025-11-29 05:28:53.017143491 +0000 UTC m=+0.177123491 container attach ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:28:53 np0005539482 strange_chaum[255861]: 167 167
Nov 29 00:28:53 np0005539482 systemd[1]: libpod-ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3.scope: Deactivated successfully.
Nov 29 00:28:53 np0005539482 podman[255845]: 2025-11-29 05:28:53.022403473 +0000 UTC m=+0.182383493 container died ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 00:28:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9c7cae146652f18d06c6f3c3b54bbbdacc3f83dc100a5cc628974c419a6efcee-merged.mount: Deactivated successfully.
Nov 29 00:28:53 np0005539482 podman[255845]: 2025-11-29 05:28:53.062979537 +0000 UTC m=+0.222959537 container remove ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:28:53 np0005539482 systemd[1]: libpod-conmon-ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3.scope: Deactivated successfully.
Nov 29 00:28:53 np0005539482 podman[255885]: 2025-11-29 05:28:53.227622546 +0000 UTC m=+0.045518029 container create a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:28:53 np0005539482 systemd[1]: Started libpod-conmon-a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586.scope.
Nov 29 00:28:53 np0005539482 podman[255885]: 2025-11-29 05:28:53.2109746 +0000 UTC m=+0.028870103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:28:53 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:53 np0005539482 podman[255885]: 2025-11-29 05:28:53.342335064 +0000 UTC m=+0.160230577 container init a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:28:53 np0005539482 podman[255885]: 2025-11-29 05:28:53.356793231 +0000 UTC m=+0.174688744 container start a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:28:53 np0005539482 podman[255885]: 2025-11-29 05:28:53.360583759 +0000 UTC m=+0.178479272 container attach a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:28:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:54 np0005539482 epic_diffie[255902]: {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:    "0": [
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:        {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "devices": [
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "/dev/loop3"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            ],
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_name": "ceph_lv0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_size": "21470642176",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "name": "ceph_lv0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "tags": {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cluster_name": "ceph",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.crush_device_class": "",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.encrypted": "0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osd_id": "0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.type": "block",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.vdo": "0"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            },
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "type": "block",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "vg_name": "ceph_vg0"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:        }
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:    ],
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:    "1": [
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:        {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "devices": [
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "/dev/loop4"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            ],
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_name": "ceph_lv1",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_size": "21470642176",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "name": "ceph_lv1",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "tags": {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cluster_name": "ceph",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.crush_device_class": "",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.encrypted": "0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osd_id": "1",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.type": "block",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.vdo": "0"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            },
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "type": "block",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "vg_name": "ceph_vg1"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:        }
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:    ],
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:    "2": [
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:        {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "devices": [
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "/dev/loop5"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            ],
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_name": "ceph_lv2",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_size": "21470642176",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "name": "ceph_lv2",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "tags": {
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.cluster_name": "ceph",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.crush_device_class": "",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.encrypted": "0",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osd_id": "2",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.type": "block",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:                "ceph.vdo": "0"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            },
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "type": "block",
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:            "vg_name": "ceph_vg2"
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:        }
Nov 29 00:28:54 np0005539482 epic_diffie[255902]:    ]
Nov 29 00:28:54 np0005539482 epic_diffie[255902]: }
Nov 29 00:28:54 np0005539482 systemd[1]: libpod-a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586.scope: Deactivated successfully.
Nov 29 00:28:54 np0005539482 podman[255885]: 2025-11-29 05:28:54.241845727 +0000 UTC m=+1.059741240 container died a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:28:54 np0005539482 systemd[1]: var-lib-containers-storage-overlay-626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909-merged.mount: Deactivated successfully.
Nov 29 00:28:54 np0005539482 podman[255885]: 2025-11-29 05:28:54.320728672 +0000 UTC m=+1.138624185 container remove a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:28:54 np0005539482 systemd[1]: libpod-conmon-a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586.scope: Deactivated successfully.
Nov 29 00:28:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.478406009 +0000 UTC m=+0.050484585 container create c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:28:55 np0005539482 systemd[1]: Started libpod-conmon-c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b.scope.
Nov 29 00:28:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.463914442 +0000 UTC m=+0.035993038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.576429049 +0000 UTC m=+0.148507705 container init c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.583976174 +0000 UTC m=+0.156054750 container start c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.587408995 +0000 UTC m=+0.159487671 container attach c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:28:55 np0005539482 gifted_moore[256082]: 167 167
Nov 29 00:28:55 np0005539482 systemd[1]: libpod-c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b.scope: Deactivated successfully.
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.589595185 +0000 UTC m=+0.161673771 container died c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:28:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-191f823807153d5116ea355375f2753b49a21b645644ad729db56fa9a78f5716-merged.mount: Deactivated successfully.
Nov 29 00:28:55 np0005539482 podman[256066]: 2025-11-29 05:28:55.664373625 +0000 UTC m=+0.236452211 container remove c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:28:55 np0005539482 systemd[1]: libpod-conmon-c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b.scope: Deactivated successfully.
Nov 29 00:28:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:55 np0005539482 podman[256106]: 2025-11-29 05:28:55.861652754 +0000 UTC m=+0.055351649 container create f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:28:55 np0005539482 systemd[1]: Started libpod-conmon-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope.
Nov 29 00:28:55 np0005539482 podman[256106]: 2025-11-29 05:28:55.832753661 +0000 UTC m=+0.026452606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:28:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:28:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:28:55 np0005539482 podman[256106]: 2025-11-29 05:28:55.956897959 +0000 UTC m=+0.150596904 container init f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:28:55 np0005539482 podman[256106]: 2025-11-29 05:28:55.967394793 +0000 UTC m=+0.161093678 container start f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:28:55 np0005539482 podman[256106]: 2025-11-29 05:28:55.971668832 +0000 UTC m=+0.165367817 container attach f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:28:56 np0005539482 gifted_brown[256122]: {
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "osd_id": 0,
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "type": "bluestore"
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:    },
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "osd_id": 1,
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "type": "bluestore"
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:    },
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "osd_id": 2,
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:        "type": "bluestore"
Nov 29 00:28:56 np0005539482 gifted_brown[256122]:    }
Nov 29 00:28:56 np0005539482 gifted_brown[256122]: }
Nov 29 00:28:56 np0005539482 systemd[1]: libpod-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope: Deactivated successfully.
Nov 29 00:28:56 np0005539482 systemd[1]: libpod-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope: Consumed 1.033s CPU time.
Nov 29 00:28:56 np0005539482 podman[256106]: 2025-11-29 05:28:56.988213017 +0000 UTC m=+1.181911912 container died f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:28:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e-merged.mount: Deactivated successfully.
Nov 29 00:28:57 np0005539482 podman[256106]: 2025-11-29 05:28:57.057338505 +0000 UTC m=+1.251037360 container remove f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:28:57 np0005539482 systemd[1]: libpod-conmon-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope: Deactivated successfully.
Nov 29 00:28:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:28:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:28:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:28:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:28:57 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ee574066-3849-4fcf-9706-b8a8c143da4f does not exist
Nov 29 00:28:57 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ac575427-fb0a-4d5a-baab-9defb75c9b88 does not exist
Nov 29 00:28:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:28:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:28:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:28:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:07 np0005539482 podman[256223]: 2025-11-29 05:29:07.0412755 +0000 UTC m=+0.090150917 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 00:29:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:29:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:12 np0005539482 podman[256245]: 2025-11-29 05:29:12.074197776 +0000 UTC m=+0.128056069 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:29:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:29:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:29:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:29:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:29:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:29:13.741 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:29:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:19 np0005539482 podman[256272]: 2025-11-29 05:29:19.997129073 +0000 UTC m=+0.057425117 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 00:29:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.956 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.957 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.957 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.957 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.991 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.991 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.992 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.992 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.993 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.993 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.993 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.994 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:29:24 np0005539482 nova_compute[254898]: 2025-11-29 05:29:24.994 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.025 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.025 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.026 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.026 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.027 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:29:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:29:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141797165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.501 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.670 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.671 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.671 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.672 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.749 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.749 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:29:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:25 np0005539482 nova_compute[254898]: 2025-11-29 05:29:25.783 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:29:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:29:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142771401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:29:26 np0005539482 nova_compute[254898]: 2025-11-29 05:29:26.177 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:29:26 np0005539482 nova_compute[254898]: 2025-11-29 05:29:26.185 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:29:26 np0005539482 nova_compute[254898]: 2025-11-29 05:29:26.211 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:29:26 np0005539482 nova_compute[254898]: 2025-11-29 05:29:26.214 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:29:26 np0005539482 nova_compute[254898]: 2025-11-29 05:29:26.214 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:29:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 00:29:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2494357982' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 00:29:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 00:29:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 00:29:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 00:29:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:29:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5780 writes, 24K keys, 5780 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5780 writes, 976 syncs, 5.92 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 00:29:38 np0005539482 podman[256337]: 2025-11-29 05:29:38.003325895 +0000 UTC m=+0.054371016 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 00:29:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:29:41
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:29:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:29:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 7055 writes, 29K keys, 7055 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7055 writes, 1300 syncs, 5.43 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 00:29:43 np0005539482 podman[256359]: 2025-11-29 05:29:43.046615782 +0000 UTC m=+0.098721228 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:29:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:29:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5631 writes, 23K keys, 5631 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5631 writes, 860 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 29 00:29:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 00:29:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 00:29:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 00:29:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 00:29:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 00:29:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:50 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 00:29:51 np0005539482 podman[256385]: 2025-11-29 05:29:51.041653137 +0000 UTC m=+0.085175482 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:29:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:29:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:29:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:29:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:29:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:29:58 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d73e39a9-1416-4d7a-94da-38014dc95c33 does not exist
Nov 29 00:29:58 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 382e6138-700e-4431-8fd1-18dd7f6bb828 does not exist
Nov 29 00:29:58 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 8929c39b-2d03-4501-82e1-b65e37189af8 does not exist
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:29:58 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.301667365 +0000 UTC m=+0.064211235 container create 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:29:59 np0005539482 systemd[1]: Started libpod-conmon-2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a.scope.
Nov 29 00:29:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.27954463 +0000 UTC m=+0.042088600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.380453977 +0000 UTC m=+0.142997897 container init 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.38570856 +0000 UTC m=+0.148252450 container start 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.389445806 +0000 UTC m=+0.151989696 container attach 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:29:59 np0005539482 eager_curie[256815]: 167 167
Nov 29 00:29:59 np0005539482 systemd[1]: libpod-2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a.scope: Deactivated successfully.
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.391700279 +0000 UTC m=+0.154244169 container died 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:29:59 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3b213f9ac7cb69d39c00675ad2dcd5f0bc3acb5ad07e106cc295d53b78313302-merged.mount: Deactivated successfully.
Nov 29 00:29:59 np0005539482 podman[256799]: 2025-11-29 05:29:59.43178209 +0000 UTC m=+0.194325990 container remove 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:29:59 np0005539482 systemd[1]: libpod-conmon-2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a.scope: Deactivated successfully.
Nov 29 00:29:59 np0005539482 podman[256839]: 2025-11-29 05:29:59.658801912 +0000 UTC m=+0.066405206 container create a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:29:59 np0005539482 systemd[1]: Started libpod-conmon-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope.
Nov 29 00:29:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:29:59 np0005539482 podman[256839]: 2025-11-29 05:29:59.636874021 +0000 UTC m=+0.044477305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:29:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:29:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:29:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:29:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:29:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:29:59 np0005539482 podman[256839]: 2025-11-29 05:29:59.751399865 +0000 UTC m=+0.159003219 container init a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:29:59 np0005539482 podman[256839]: 2025-11-29 05:29:59.764228964 +0000 UTC m=+0.171832278 container start a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:29:59 np0005539482 podman[256839]: 2025-11-29 05:29:59.769213289 +0000 UTC m=+0.176816563 container attach a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:29:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:29:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:00 np0005539482 beautiful_lichterman[256857]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:30:00 np0005539482 beautiful_lichterman[256857]: --> relative data size: 1.0
Nov 29 00:30:00 np0005539482 beautiful_lichterman[256857]: --> All data devices are unavailable
Nov 29 00:30:00 np0005539482 systemd[1]: libpod-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope: Deactivated successfully.
Nov 29 00:30:00 np0005539482 podman[256839]: 2025-11-29 05:30:00.850772026 +0000 UTC m=+1.258375310 container died a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:30:00 np0005539482 systemd[1]: libpod-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope: Consumed 1.034s CPU time.
Nov 29 00:30:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2-merged.mount: Deactivated successfully.
Nov 29 00:30:00 np0005539482 podman[256839]: 2025-11-29 05:30:00.907504605 +0000 UTC m=+1.315107869 container remove a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:30:00 np0005539482 systemd[1]: libpod-conmon-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope: Deactivated successfully.
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.694282496 +0000 UTC m=+0.052825279 container create 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:30:01 np0005539482 systemd[1]: Started libpod-conmon-64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1.scope.
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.667709628 +0000 UTC m=+0.026252501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:30:01 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:30:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.783312577 +0000 UTC m=+0.141855450 container init 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.790703219 +0000 UTC m=+0.149246002 container start 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.794040517 +0000 UTC m=+0.152583320 container attach 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:30:01 np0005539482 admiring_dirac[257057]: 167 167
Nov 29 00:30:01 np0005539482 systemd[1]: libpod-64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1.scope: Deactivated successfully.
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.798382848 +0000 UTC m=+0.156925631 container died 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:30:01 np0005539482 systemd[1]: var-lib-containers-storage-overlay-227e6aacd3dd4d384ad6b1467b4d877d4f12c658863fc127b8e11bae803f2b8c-merged.mount: Deactivated successfully.
Nov 29 00:30:01 np0005539482 podman[257040]: 2025-11-29 05:30:01.832669175 +0000 UTC m=+0.191211968 container remove 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:30:01 np0005539482 systemd[1]: libpod-conmon-64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1.scope: Deactivated successfully.
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.034243263 +0000 UTC m=+0.059684818 container create 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:30:02 np0005539482 systemd[1]: Started libpod-conmon-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope.
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.014996766 +0000 UTC m=+0.040438361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:30:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:30:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.135192911 +0000 UTC m=+0.160634466 container init 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.146563976 +0000 UTC m=+0.172005531 container start 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.150339324 +0000 UTC m=+0.175780919 container attach 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]: {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:    "0": [
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:        {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "devices": [
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "/dev/loop3"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            ],
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_name": "ceph_lv0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_size": "21470642176",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "name": "ceph_lv0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "tags": {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cluster_name": "ceph",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.crush_device_class": "",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.encrypted": "0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osd_id": "0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.type": "block",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.vdo": "0"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            },
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "type": "block",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "vg_name": "ceph_vg0"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:        }
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:    ],
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:    "1": [
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:        {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "devices": [
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "/dev/loop4"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            ],
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_name": "ceph_lv1",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_size": "21470642176",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "name": "ceph_lv1",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "tags": {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cluster_name": "ceph",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.crush_device_class": "",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.encrypted": "0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osd_id": "1",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.type": "block",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.vdo": "0"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            },
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "type": "block",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "vg_name": "ceph_vg1"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:        }
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:    ],
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:    "2": [
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:        {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "devices": [
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "/dev/loop5"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            ],
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_name": "ceph_lv2",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_size": "21470642176",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "name": "ceph_lv2",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "tags": {
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.cluster_name": "ceph",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.crush_device_class": "",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.encrypted": "0",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osd_id": "2",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.type": "block",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:                "ceph.vdo": "0"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            },
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "type": "block",
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:            "vg_name": "ceph_vg2"
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:        }
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]:    ]
Nov 29 00:30:02 np0005539482 nifty_hawking[257098]: }
Nov 29 00:30:02 np0005539482 systemd[1]: libpod-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope: Deactivated successfully.
Nov 29 00:30:02 np0005539482 conmon[257098]: conmon 62e856d2f885a439c598 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope/container/memory.events
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.92275501 +0000 UTC m=+0.948196565 container died 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:30:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a-merged.mount: Deactivated successfully.
Nov 29 00:30:02 np0005539482 podman[257081]: 2025-11-29 05:30:02.982116491 +0000 UTC m=+1.007558056 container remove 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:30:02 np0005539482 systemd[1]: libpod-conmon-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope: Deactivated successfully.
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.627661826 +0000 UTC m=+0.039984300 container create 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:30:03 np0005539482 systemd[1]: Started libpod-conmon-661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f.scope.
Nov 29 00:30:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.699507438 +0000 UTC m=+0.111829932 container init 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.61145893 +0000 UTC m=+0.023781424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.705858365 +0000 UTC m=+0.118180839 container start 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.708941637 +0000 UTC m=+0.121264111 container attach 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:30:03 np0005539482 silly_euler[257274]: 167 167
Nov 29 00:30:03 np0005539482 systemd[1]: libpod-661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f.scope: Deactivated successfully.
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.711188919 +0000 UTC m=+0.123511393 container died 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:30:03 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c4c54abd23e245d0e2a6002b36b6ec6105f3c9b6e3a16f86c762ba8228902301-merged.mount: Deactivated successfully.
Nov 29 00:30:03 np0005539482 podman[257258]: 2025-11-29 05:30:03.740760387 +0000 UTC m=+0.153082861 container remove 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:30:03 np0005539482 systemd[1]: libpod-conmon-661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f.scope: Deactivated successfully.
Nov 29 00:30:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:03 np0005539482 podman[257299]: 2025-11-29 05:30:03.872373018 +0000 UTC m=+0.030096261 container create 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:30:03 np0005539482 systemd[1]: Started libpod-conmon-139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6.scope.
Nov 29 00:30:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:30:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:30:03 np0005539482 podman[257299]: 2025-11-29 05:30:03.926892736 +0000 UTC m=+0.084615999 container init 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:30:03 np0005539482 podman[257299]: 2025-11-29 05:30:03.932889886 +0000 UTC m=+0.090613129 container start 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:30:03 np0005539482 podman[257299]: 2025-11-29 05:30:03.935843304 +0000 UTC m=+0.093566547 container attach 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 00:30:03 np0005539482 podman[257299]: 2025-11-29 05:30:03.859419997 +0000 UTC m=+0.017143260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]: {
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "osd_id": 0,
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "type": "bluestore"
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:    },
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "osd_id": 1,
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "type": "bluestore"
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:    },
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "osd_id": 2,
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:        "type": "bluestore"
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]:    }
Nov 29 00:30:04 np0005539482 upbeat_keldysh[257316]: }
Nov 29 00:30:04 np0005539482 systemd[1]: libpod-139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6.scope: Deactivated successfully.
Nov 29 00:30:04 np0005539482 podman[257299]: 2025-11-29 05:30:04.847709885 +0000 UTC m=+1.005433128 container died 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:30:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24-merged.mount: Deactivated successfully.
Nov 29 00:30:04 np0005539482 podman[257299]: 2025-11-29 05:30:04.908954199 +0000 UTC m=+1.066677452 container remove 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:30:04 np0005539482 systemd[1]: libpod-conmon-139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6.scope: Deactivated successfully.
Nov 29 00:30:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:30:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:30:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:30:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:30:04 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f0e72c40-9a88-4257-89d0-a12d3963f939 does not exist
Nov 29 00:30:04 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 9da554f6-cd45-4b4e-9f12-932cb921bf97 does not exist
Nov 29 00:30:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:30:05 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:30:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:07 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 00:30:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:07.985555) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:30:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 00:30:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394207985615, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1497, "num_deletes": 251, "total_data_size": 2371344, "memory_usage": 2410304, "flush_reason": "Manual Compaction"}
Nov 29 00:30:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208009876, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2316672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14827, "largest_seqno": 16323, "table_properties": {"data_size": 2309752, "index_size": 3991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14339, "raw_average_key_size": 19, "raw_value_size": 2295825, "raw_average_value_size": 3157, "num_data_blocks": 183, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394054, "oldest_key_time": 1764394054, "file_creation_time": 1764394207, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 24349 microseconds, and 11891 cpu microseconds.
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.009915) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2316672 bytes OK
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.009934) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.011933) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.012003) EVENT_LOG_v1 {"time_micros": 1764394208011989, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.012039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2364766, prev total WAL file size 2364766, number of live WAL files 2.
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.013540) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2262KB)], [35(6993KB)]
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208013598, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9477879, "oldest_snapshot_seqno": -1}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3996 keys, 7692678 bytes, temperature: kUnknown
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208063776, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7692678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7663763, "index_size": 17797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97600, "raw_average_key_size": 24, "raw_value_size": 7589305, "raw_average_value_size": 1899, "num_data_blocks": 754, "num_entries": 3996, "num_filter_entries": 3996, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.064147) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7692678 bytes
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.065612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.4 rd, 152.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4510, records dropped: 514 output_compression: NoCompression
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.065648) EVENT_LOG_v1 {"time_micros": 1764394208065636, "job": 16, "event": "compaction_finished", "compaction_time_micros": 50305, "compaction_time_cpu_micros": 21340, "output_level": 6, "num_output_files": 1, "total_output_size": 7692678, "num_input_records": 4510, "num_output_records": 3996, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208066121, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208067430, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.013429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:30:08 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:30:09 np0005539482 podman[257413]: 2025-11-29 05:30:09.019790477 +0000 UTC m=+0.067622274 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd)
Nov 29 00:30:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:30:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:30:13.741 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:30:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:30:13.741 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:30:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:30:13.742 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:30:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:14 np0005539482 podman[257434]: 2025-11-29 05:30:14.114229673 +0000 UTC m=+0.150380420 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:30:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:30:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630534927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:30:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:30:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630534927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:30:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:22 np0005539482 podman[257462]: 2025-11-29 05:30:22.050411479 +0000 UTC m=+0.087410524 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:30:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.206 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.207 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.224 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.225 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.225 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.236 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.236 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.237 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.237 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.237 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.989 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.990 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.990 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.991 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:30:26 np0005539482 nova_compute[254898]: 2025-11-29 05:30:26.991 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:30:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:30:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121882001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.478 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.667 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.668 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.669 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.669 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.766 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.766 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:30:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:27 np0005539482 nova_compute[254898]: 2025-11-29 05:30:27.784 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:30:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:30:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735413006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:30:28 np0005539482 nova_compute[254898]: 2025-11-29 05:30:28.294 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:30:28 np0005539482 nova_compute[254898]: 2025-11-29 05:30:28.299 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:30:28 np0005539482 nova_compute[254898]: 2025-11-29 05:30:28.317 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:30:28 np0005539482 nova_compute[254898]: 2025-11-29 05:30:28.319 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:30:28 np0005539482 nova_compute[254898]: 2025-11-29 05:30:28.320 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:30:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:39 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:30:39.205 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:30:39 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:30:39.205 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:30:39 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:30:39.207 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:30:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:40 np0005539482 podman[257530]: 2025-11-29 05:30:40.040429005 +0000 UTC m=+0.083832621 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:30:41
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:30:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:45 np0005539482 podman[257550]: 2025-11-29 05:30:45.014982322 +0000 UTC m=+0.072568039 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 00:30:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:30:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:30:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:30:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:30:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:30:53 np0005539482 podman[257576]: 2025-11-29 05:30:53.004907656 +0000 UTC m=+0.050979256 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 00:30:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:30:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:30:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:30:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:30:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:05 np0005539482 podman[257770]: 2025-11-29 05:31:05.854731458 +0000 UTC m=+0.055931864 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:31:05 np0005539482 podman[257770]: 2025-11-29 05:31:05.959914523 +0000 UTC m=+0.161115009 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:31:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:31:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:31:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:07 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev c2aac0cd-a40a-4fc6-9e67-ac6c3c0d7731 does not exist
Nov 29 00:31:07 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 1871e192-ed9d-4549-8627-7d33a399a5b6 does not exist
Nov 29 00:31:07 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6871a3b8-082b-44d9-9b8f-7c19e53de6be does not exist
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:31:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:31:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:31:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:31:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:08 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.384209209 +0000 UTC m=+0.065415003 container create 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:31:08 np0005539482 systemd[1]: Started libpod-conmon-6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955.scope.
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.356033468 +0000 UTC m=+0.037239292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:31:08 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.476890762 +0000 UTC m=+0.158096526 container init 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.489068687 +0000 UTC m=+0.170274471 container start 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.49250122 +0000 UTC m=+0.173706994 container attach 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:31:08 np0005539482 vigilant_goldwasser[258213]: 167 167
Nov 29 00:31:08 np0005539482 systemd[1]: libpod-6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955.scope: Deactivated successfully.
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.498086115 +0000 UTC m=+0.179291889 container died 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:31:08 np0005539482 systemd[1]: var-lib-containers-storage-overlay-efd1d9689d6710bced8604af73539b3e3079ec30e0bb10ddde31b3e5c21b4a74-merged.mount: Deactivated successfully.
Nov 29 00:31:08 np0005539482 podman[258197]: 2025-11-29 05:31:08.549371656 +0000 UTC m=+0.230577400 container remove 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:31:08 np0005539482 systemd[1]: libpod-conmon-6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955.scope: Deactivated successfully.
Nov 29 00:31:08 np0005539482 podman[258238]: 2025-11-29 05:31:08.828254983 +0000 UTC m=+0.074691068 container create 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 00:31:08 np0005539482 systemd[1]: Started libpod-conmon-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope.
Nov 29 00:31:08 np0005539482 podman[258238]: 2025-11-29 05:31:08.797494069 +0000 UTC m=+0.043930164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:31:08 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:31:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:08 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:08 np0005539482 podman[258238]: 2025-11-29 05:31:08.950961222 +0000 UTC m=+0.197397277 container init 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:31:08 np0005539482 podman[258238]: 2025-11-29 05:31:08.968345152 +0000 UTC m=+0.214781207 container start 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:31:08 np0005539482 podman[258238]: 2025-11-29 05:31:08.973343654 +0000 UTC m=+0.219779799 container attach 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:31:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:10 np0005539482 eloquent_hertz[258255]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:31:10 np0005539482 eloquent_hertz[258255]: --> relative data size: 1.0
Nov 29 00:31:10 np0005539482 eloquent_hertz[258255]: --> All data devices are unavailable
Nov 29 00:31:10 np0005539482 systemd[1]: libpod-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope: Deactivated successfully.
Nov 29 00:31:10 np0005539482 podman[258238]: 2025-11-29 05:31:10.102401862 +0000 UTC m=+1.348837917 container died 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:31:10 np0005539482 systemd[1]: libpod-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope: Consumed 1.087s CPU time.
Nov 29 00:31:10 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8-merged.mount: Deactivated successfully.
Nov 29 00:31:10 np0005539482 podman[258238]: 2025-11-29 05:31:10.184613291 +0000 UTC m=+1.431049366 container remove 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:31:10 np0005539482 systemd[1]: libpod-conmon-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope: Deactivated successfully.
Nov 29 00:31:10 np0005539482 podman[258285]: 2025-11-29 05:31:10.238120626 +0000 UTC m=+0.096191158 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.089708461 +0000 UTC m=+0.066163493 container create 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.058192238 +0000 UTC m=+0.034647280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:31:11 np0005539482 systemd[1]: Started libpod-conmon-9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe.scope.
Nov 29 00:31:11 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.208405222 +0000 UTC m=+0.184860234 container init 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.220989057 +0000 UTC m=+0.197444089 container start 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.225761983 +0000 UTC m=+0.202217005 container attach 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:31:11 np0005539482 gifted_carver[258472]: 167 167
Nov 29 00:31:11 np0005539482 systemd[1]: libpod-9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe.scope: Deactivated successfully.
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.233159591 +0000 UTC m=+0.209614613 container died 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:31:11 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ba99f1919b6833fc97561f031767b340687fa1a5195e13cf91bcf9013e097bf4-merged.mount: Deactivated successfully.
Nov 29 00:31:11 np0005539482 podman[258455]: 2025-11-29 05:31:11.281318226 +0000 UTC m=+0.257773258 container remove 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:31:11 np0005539482 systemd[1]: libpod-conmon-9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe.scope: Deactivated successfully.
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:31:11 np0005539482 podman[258496]: 2025-11-29 05:31:11.503654506 +0000 UTC m=+0.059118431 container create 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:31:11 np0005539482 systemd[1]: Started libpod-conmon-4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792.scope.
Nov 29 00:31:11 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:31:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:11 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:11 np0005539482 podman[258496]: 2025-11-29 05:31:11.485980139 +0000 UTC m=+0.041444084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:31:11 np0005539482 podman[258496]: 2025-11-29 05:31:11.58774653 +0000 UTC m=+0.143210475 container init 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:31:11 np0005539482 podman[258496]: 2025-11-29 05:31:11.593936901 +0000 UTC m=+0.149400826 container start 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:31:11 np0005539482 podman[258496]: 2025-11-29 05:31:11.597820564 +0000 UTC m=+0.153284489 container attach 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:31:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:12 np0005539482 zen_swartz[258512]: {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:    "0": [
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:        {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "devices": [
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "/dev/loop3"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            ],
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_name": "ceph_lv0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_size": "21470642176",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "name": "ceph_lv0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "tags": {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cluster_name": "ceph",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.crush_device_class": "",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.encrypted": "0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osd_id": "0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.type": "block",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.vdo": "0"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            },
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "type": "block",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "vg_name": "ceph_vg0"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:        }
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:    ],
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:    "1": [
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:        {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "devices": [
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "/dev/loop4"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            ],
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_name": "ceph_lv1",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_size": "21470642176",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "name": "ceph_lv1",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "tags": {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cluster_name": "ceph",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.crush_device_class": "",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.encrypted": "0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osd_id": "1",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.type": "block",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.vdo": "0"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            },
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "type": "block",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "vg_name": "ceph_vg1"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:        }
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:    ],
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:    "2": [
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:        {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "devices": [
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "/dev/loop5"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            ],
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_name": "ceph_lv2",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_size": "21470642176",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "name": "ceph_lv2",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "tags": {
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.cluster_name": "ceph",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.crush_device_class": "",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.encrypted": "0",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osd_id": "2",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.type": "block",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:                "ceph.vdo": "0"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            },
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "type": "block",
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:            "vg_name": "ceph_vg2"
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:        }
Nov 29 00:31:12 np0005539482 zen_swartz[258512]:    ]
Nov 29 00:31:12 np0005539482 zen_swartz[258512]: }
Nov 29 00:31:12 np0005539482 systemd[1]: libpod-4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792.scope: Deactivated successfully.
Nov 29 00:31:12 np0005539482 podman[258496]: 2025-11-29 05:31:12.32957758 +0000 UTC m=+0.885041535 container died 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:31:12 np0005539482 systemd[1]: var-lib-containers-storage-overlay-79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61-merged.mount: Deactivated successfully.
Nov 29 00:31:12 np0005539482 podman[258496]: 2025-11-29 05:31:12.389972551 +0000 UTC m=+0.945436506 container remove 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:31:12 np0005539482 systemd[1]: libpod-conmon-4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792.scope: Deactivated successfully.
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.109205533 +0000 UTC m=+0.069149403 container create 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:31:13 np0005539482 systemd[1]: Started libpod-conmon-5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6.scope.
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.06522677 +0000 UTC m=+0.025170670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:31:13 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.212695667 +0000 UTC m=+0.172639557 container init 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.219581004 +0000 UTC m=+0.179524874 container start 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 00:31:13 np0005539482 awesome_chebyshev[258694]: 167 167
Nov 29 00:31:13 np0005539482 systemd[1]: libpod-5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6.scope: Deactivated successfully.
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.245633614 +0000 UTC m=+0.205577514 container attach 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.246539356 +0000 UTC m=+0.206483226 container died 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:31:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-63d0ca1d97ffac121f4dab66a0b5159fcb49967a877c3fd09f0f2601f8f7f5e5-merged.mount: Deactivated successfully.
Nov 29 00:31:13 np0005539482 podman[258677]: 2025-11-29 05:31:13.421637812 +0000 UTC m=+0.381581692 container remove 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:31:13 np0005539482 systemd[1]: libpod-conmon-5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6.scope: Deactivated successfully.
Nov 29 00:31:13 np0005539482 podman[258719]: 2025-11-29 05:31:13.63270173 +0000 UTC m=+0.052714217 container create dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:31:13 np0005539482 systemd[1]: Started libpod-conmon-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope.
Nov 29 00:31:13 np0005539482 podman[258719]: 2025-11-29 05:31:13.601006512 +0000 UTC m=+0.021018979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:31:13 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:31:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:31:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:31:13.742 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:31:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:31:13.744 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:31:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:31:13.745 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:31:13 np0005539482 podman[258719]: 2025-11-29 05:31:13.745314564 +0000 UTC m=+0.165327051 container init dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:31:13 np0005539482 podman[258719]: 2025-11-29 05:31:13.756563436 +0000 UTC m=+0.176575923 container start dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 00:31:13 np0005539482 podman[258719]: 2025-11-29 05:31:13.771412826 +0000 UTC m=+0.191425353 container attach dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:31:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:31:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2105041158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:31:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:31:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2105041158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:31:14 np0005539482 clever_carver[258736]: {
Nov 29 00:31:14 np0005539482 clever_carver[258736]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "osd_id": 0,
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "type": "bluestore"
Nov 29 00:31:14 np0005539482 clever_carver[258736]:    },
Nov 29 00:31:14 np0005539482 clever_carver[258736]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "osd_id": 1,
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "type": "bluestore"
Nov 29 00:31:14 np0005539482 clever_carver[258736]:    },
Nov 29 00:31:14 np0005539482 clever_carver[258736]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "osd_id": 2,
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:31:14 np0005539482 clever_carver[258736]:        "type": "bluestore"
Nov 29 00:31:14 np0005539482 clever_carver[258736]:    }
Nov 29 00:31:14 np0005539482 clever_carver[258736]: }
Nov 29 00:31:14 np0005539482 systemd[1]: libpod-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope: Deactivated successfully.
Nov 29 00:31:14 np0005539482 systemd[1]: libpod-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope: Consumed 1.012s CPU time.
Nov 29 00:31:14 np0005539482 podman[258769]: 2025-11-29 05:31:14.808413427 +0000 UTC m=+0.023910461 container died dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:31:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:15 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac-merged.mount: Deactivated successfully.
Nov 29 00:31:15 np0005539482 podman[258769]: 2025-11-29 05:31:15.191049464 +0000 UTC m=+0.406546478 container remove dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:31:15 np0005539482 systemd[1]: libpod-conmon-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope: Deactivated successfully.
Nov 29 00:31:15 np0005539482 podman[258784]: 2025-11-29 05:31:15.223030428 +0000 UTC m=+0.135438728 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 00:31:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:31:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:31:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:15 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7efd5189-295a-4b95-b16f-43c68a2520aa does not exist
Nov 29 00:31:15 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 2f481f00-c95c-40ed-9e84-ee471495af00 does not exist
Nov 29 00:31:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:16 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:16 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:31:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:24 np0005539482 podman[258863]: 2025-11-29 05:31:24.005395472 +0000 UTC m=+0.054865078 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 00:31:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.320 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.321 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.321 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.321 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.950 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:31:26 np0005539482 nova_compute[254898]: 2025-11-29 05:31:26.985 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:31:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:27 np0005539482 nova_compute[254898]: 2025-11-29 05:31:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:27 np0005539482 nova_compute[254898]: 2025-11-29 05:31:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.005 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.005 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:31:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:31:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237359323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.438 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.575 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.577 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.577 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.577 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.664 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.664 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:31:28 np0005539482 nova_compute[254898]: 2025-11-29 05:31:28.693 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:31:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:31:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999455183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:31:29 np0005539482 nova_compute[254898]: 2025-11-29 05:31:29.101 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:31:29 np0005539482 nova_compute[254898]: 2025-11-29 05:31:29.106 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:31:29 np0005539482 nova_compute[254898]: 2025-11-29 05:31:29.140 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:31:29 np0005539482 nova_compute[254898]: 2025-11-29 05:31:29.142 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:31:29 np0005539482 nova_compute[254898]: 2025-11-29 05:31:29.142 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:31:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:30 np0005539482 nova_compute[254898]: 2025-11-29 05:31:30.142 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:31:30 np0005539482 nova_compute[254898]: 2025-11-29 05:31:30.142 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:31:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:40 np0005539482 podman[258926]: 2025-11-29 05:31:40.997169706 +0000 UTC m=+0.054295495 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:31:41
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups']
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:31:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:46 np0005539482 podman[258947]: 2025-11-29 05:31:46.035054641 +0000 UTC m=+0.091026613 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 00:31:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:31:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:31:54 np0005539482 podman[258973]: 2025-11-29 05:31:54.994395657 +0000 UTC m=+0.049533139 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 00:31:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:31:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:32:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:12 np0005539482 podman[258994]: 2025-11-29 05:32:12.047907545 +0000 UTC m=+0.091988387 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 00:32:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:32:13.745 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:32:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:32:13.747 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:32:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:32:13.748 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:32:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:32:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159069613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:32:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:32:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159069613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:32:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:32:16 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 442a2dd5-4db3-4139-ad69-183fbdc38442 does not exist
Nov 29 00:32:16 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7a036b06-1f48-44f8-819c-a74a40c5b33b does not exist
Nov 29 00:32:16 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f21b6119-759e-4ea2-9cef-385c02e5b859 does not exist
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:32:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:32:16 np0005539482 podman[259173]: 2025-11-29 05:32:16.34935213 +0000 UTC m=+0.116885879 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.743170479 +0000 UTC m=+0.036454753 container create 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:32:16 np0005539482 systemd[1]: Started libpod-conmon-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope.
Nov 29 00:32:16 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.72706773 +0000 UTC m=+0.020352024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.82505065 +0000 UTC m=+0.118334944 container init 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.831829535 +0000 UTC m=+0.125113799 container start 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.834468588 +0000 UTC m=+0.127752942 container attach 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:32:16 np0005539482 systemd[1]: libpod-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope: Deactivated successfully.
Nov 29 00:32:16 np0005539482 brave_wilson[259331]: 167 167
Nov 29 00:32:16 np0005539482 conmon[259331]: conmon 24c9575a8029f14d68ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope/container/memory.events
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.841947549 +0000 UTC m=+0.135231813 container died 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 00:32:16 np0005539482 systemd[1]: var-lib-containers-storage-overlay-49c3c8d8b2d6db2ae0eacb5e749098ecf53d8b9eb6fb85b91eb8b1345f978952-merged.mount: Deactivated successfully.
Nov 29 00:32:16 np0005539482 podman[259315]: 2025-11-29 05:32:16.879239731 +0000 UTC m=+0.172524005 container remove 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:32:16 np0005539482 systemd[1]: libpod-conmon-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope: Deactivated successfully.
Nov 29 00:32:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:32:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:32:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:32:17 np0005539482 podman[259356]: 2025-11-29 05:32:17.071433972 +0000 UTC m=+0.059083752 container create d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:32:17 np0005539482 systemd[1]: Started libpod-conmon-d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b.scope.
Nov 29 00:32:17 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:32:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:17 np0005539482 podman[259356]: 2025-11-29 05:32:17.051004207 +0000 UTC m=+0.038653957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:32:17 np0005539482 podman[259356]: 2025-11-29 05:32:17.162362782 +0000 UTC m=+0.150012572 container init d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:32:17 np0005539482 podman[259356]: 2025-11-29 05:32:17.175435578 +0000 UTC m=+0.163085318 container start d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:32:17 np0005539482 podman[259356]: 2025-11-29 05:32:17.178728157 +0000 UTC m=+0.166377927 container attach d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:32:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:18 np0005539482 crazy_bhabha[259373]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:32:18 np0005539482 crazy_bhabha[259373]: --> relative data size: 1.0
Nov 29 00:32:18 np0005539482 crazy_bhabha[259373]: --> All data devices are unavailable
Nov 29 00:32:18 np0005539482 systemd[1]: libpod-d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b.scope: Deactivated successfully.
Nov 29 00:32:18 np0005539482 podman[259356]: 2025-11-29 05:32:18.147657821 +0000 UTC m=+1.135307601 container died d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:32:18 np0005539482 systemd[1]: var-lib-containers-storage-overlay-546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57-merged.mount: Deactivated successfully.
Nov 29 00:32:18 np0005539482 podman[259356]: 2025-11-29 05:32:18.195584081 +0000 UTC m=+1.183233821 container remove d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:32:18 np0005539482 systemd[1]: libpod-conmon-d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b.scope: Deactivated successfully.
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.742023382 +0000 UTC m=+0.045156623 container create 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 00:32:18 np0005539482 systemd[1]: Started libpod-conmon-5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69.scope.
Nov 29 00:32:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.804144115 +0000 UTC m=+0.107277386 container init 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.81506754 +0000 UTC m=+0.118200781 container start 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.818113293 +0000 UTC m=+0.121246554 container attach 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.723197496 +0000 UTC m=+0.026330797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:32:18 np0005539482 pensive_carver[259570]: 167 167
Nov 29 00:32:18 np0005539482 systemd[1]: libpod-5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69.scope: Deactivated successfully.
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.822407287 +0000 UTC m=+0.125540528 container died 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:32:18 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f6d660fb10cb3624408b8744267b52a5c86210999eb2d4e0275733c3083931e5-merged.mount: Deactivated successfully.
Nov 29 00:32:18 np0005539482 podman[259554]: 2025-11-29 05:32:18.849650857 +0000 UTC m=+0.152784098 container remove 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:32:18 np0005539482 systemd[1]: libpod-conmon-5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69.scope: Deactivated successfully.
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:19.018374988 +0000 UTC m=+0.039944316 container create 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:32:19 np0005539482 systemd[1]: Started libpod-conmon-86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef.scope.
Nov 29 00:32:19 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:32:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:19 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:18.99900388 +0000 UTC m=+0.020573178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:19.11101215 +0000 UTC m=+0.132581448 container init 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:19.120053499 +0000 UTC m=+0.141622827 container start 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:19.12508335 +0000 UTC m=+0.146652658 container attach 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]: {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:    "0": [
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:        {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "devices": [
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "/dev/loop3"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            ],
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_name": "ceph_lv0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_size": "21470642176",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "name": "ceph_lv0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "tags": {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cluster_name": "ceph",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.crush_device_class": "",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.encrypted": "0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osd_id": "0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.type": "block",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.vdo": "0"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            },
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "type": "block",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "vg_name": "ceph_vg0"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:        }
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:    ],
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:    "1": [
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:        {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "devices": [
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "/dev/loop4"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            ],
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_name": "ceph_lv1",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_size": "21470642176",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "name": "ceph_lv1",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "tags": {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cluster_name": "ceph",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.crush_device_class": "",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.encrypted": "0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osd_id": "1",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.type": "block",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.vdo": "0"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            },
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "type": "block",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "vg_name": "ceph_vg1"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:        }
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:    ],
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:    "2": [
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:        {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "devices": [
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "/dev/loop5"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            ],
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_name": "ceph_lv2",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_size": "21470642176",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "name": "ceph_lv2",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "tags": {
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.cluster_name": "ceph",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.crush_device_class": "",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.encrypted": "0",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osd_id": "2",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.type": "block",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:                "ceph.vdo": "0"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            },
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "type": "block",
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:            "vg_name": "ceph_vg2"
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:        }
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]:    ]
Nov 29 00:32:19 np0005539482 trusting_mendel[259612]: }
Nov 29 00:32:19 np0005539482 systemd[1]: libpod-86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef.scope: Deactivated successfully.
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:19.820609559 +0000 UTC m=+0.842178847 container died 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:32:19 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6-merged.mount: Deactivated successfully.
Nov 29 00:32:19 np0005539482 podman[259596]: 2025-11-29 05:32:19.879833032 +0000 UTC m=+0.901402350 container remove 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 00:32:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:19 np0005539482 systemd[1]: libpod-conmon-86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef.scope: Deactivated successfully.
Nov 29 00:32:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.554855575 +0000 UTC m=+0.044821885 container create ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:32:20 np0005539482 systemd[1]: Started libpod-conmon-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope.
Nov 29 00:32:20 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.535592329 +0000 UTC m=+0.025558689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.631606451 +0000 UTC m=+0.121572761 container init ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.637054413 +0000 UTC m=+0.127020723 container start ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.639907522 +0000 UTC m=+0.129873832 container attach ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:32:20 np0005539482 condescending_ganguly[259791]: 167 167
Nov 29 00:32:20 np0005539482 systemd[1]: libpod-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope: Deactivated successfully.
Nov 29 00:32:20 np0005539482 conmon[259791]: conmon ca1dd3923f5dd64c4051 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope/container/memory.events
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.643354836 +0000 UTC m=+0.133321176 container died ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:32:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e48382776c319aea0436391fda6ffc3edd22b1a084e46db12967643aab6a28ca-merged.mount: Deactivated successfully.
Nov 29 00:32:20 np0005539482 podman[259775]: 2025-11-29 05:32:20.680122725 +0000 UTC m=+0.170089025 container remove ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:32:20 np0005539482 systemd[1]: libpod-conmon-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope: Deactivated successfully.
Nov 29 00:32:20 np0005539482 podman[259816]: 2025-11-29 05:32:20.843892548 +0000 UTC m=+0.047103821 container create 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:32:20 np0005539482 systemd[1]: Started libpod-conmon-7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d.scope.
Nov 29 00:32:20 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:32:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:32:20 np0005539482 podman[259816]: 2025-11-29 05:32:20.915754516 +0000 UTC m=+0.118965799 container init 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:32:20 np0005539482 podman[259816]: 2025-11-29 05:32:20.921753312 +0000 UTC m=+0.124964575 container start 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:32:20 np0005539482 podman[259816]: 2025-11-29 05:32:20.828425904 +0000 UTC m=+0.031637197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:32:20 np0005539482 podman[259816]: 2025-11-29 05:32:20.924955929 +0000 UTC m=+0.128167222 container attach 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]: {
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "osd_id": 0,
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "type": "bluestore"
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:    },
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "osd_id": 1,
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "type": "bluestore"
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:    },
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "osd_id": 2,
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:        "type": "bluestore"
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]:    }
Nov 29 00:32:21 np0005539482 goofy_knuth[259833]: }
Nov 29 00:32:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:21 np0005539482 systemd[1]: libpod-7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d.scope: Deactivated successfully.
Nov 29 00:32:21 np0005539482 podman[259867]: 2025-11-29 05:32:21.9735005 +0000 UTC m=+0.044413416 container died 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:32:21 np0005539482 systemd[1]: var-lib-containers-storage-overlay-791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f-merged.mount: Deactivated successfully.
Nov 29 00:32:22 np0005539482 podman[259867]: 2025-11-29 05:32:22.025218681 +0000 UTC m=+0.096131497 container remove 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:32:22 np0005539482 systemd[1]: libpod-conmon-7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d.scope: Deactivated successfully.
Nov 29 00:32:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:32:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:32:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:32:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:32:22 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 41f4b115-5b63-4fe2-b6d4-2f47647f87f0 does not exist
Nov 29 00:32:22 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 61c127a9-4e64-44ba-8f1d-21774a3242da does not exist
Nov 29 00:32:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:32:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:32:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:25 np0005539482 nova_compute[254898]: 2025-11-29 05:32:25.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:25 np0005539482 nova_compute[254898]: 2025-11-29 05:32:25.971 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:26 np0005539482 podman[259932]: 2025-11-29 05:32:26.033140024 +0000 UTC m=+0.076735528 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:32:26 np0005539482 nova_compute[254898]: 2025-11-29 05:32:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:27 np0005539482 nova_compute[254898]: 2025-11-29 05:32:27.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:27 np0005539482 nova_compute[254898]: 2025-11-29 05:32:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:27 np0005539482 nova_compute[254898]: 2025-11-29 05:32:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:28 np0005539482 nova_compute[254898]: 2025-11-29 05:32:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:28 np0005539482 nova_compute[254898]: 2025-11-29 05:32:28.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:32:28 np0005539482 nova_compute[254898]: 2025-11-29 05:32:28.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.224 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.227 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.255 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.255 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.256 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.256 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.256 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:32:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:32:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733328198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.701 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:32:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.917 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.918 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.918 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:32:29 np0005539482 nova_compute[254898]: 2025-11-29 05:32:29.919 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.015 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.016 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.046 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:32:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:32:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.475 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.483 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.508 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.512 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:32:30 np0005539482 nova_compute[254898]: 2025-11-29 05:32:30.513 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:32:31 np0005539482 nova_compute[254898]: 2025-11-29 05:32:31.240 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:31 np0005539482 nova_compute[254898]: 2025-11-29 05:32:31.241 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:32:31 np0005539482 nova_compute[254898]: 2025-11-29 05:32:31.241 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:32:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:32:41
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups']
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:32:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:43 np0005539482 podman[259995]: 2025-11-29 05:32:43.041031479 +0000 UTC m=+0.084298351 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:32:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:47 np0005539482 podman[260015]: 2025-11-29 05:32:47.060714036 +0000 UTC m=+0.104041138 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 00:32:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:32:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:57 np0005539482 podman[260041]: 2025-11-29 05:32:57.062371441 +0000 UTC m=+0.098727550 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:32:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:32:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:32:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:33:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:33:13.746 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:33:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:33:13.747 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:33:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:33:13.747 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:33:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:14 np0005539482 podman[260060]: 2025-11-29 05:33:14.021540638 +0000 UTC m=+0.078368342 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:33:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:33:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/207339170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:33:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:33:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/207339170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:33:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:18 np0005539482 podman[260080]: 2025-11-29 05:33:18.063456004 +0000 UTC m=+0.116919316 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 00:33:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:33:23 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 11b03f41-e8fc-409e-8cb3-7abe1bca0259 does not exist
Nov 29 00:33:23 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev bc3ea8e7-a975-4b72-ba53-81b6b665210c does not exist
Nov 29 00:33:23 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d0836cc1-173a-41c5-9ff3-cc9fd17973d3 does not exist
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:33:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.727572802 +0000 UTC m=+0.052819038 container create 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 00:33:23 np0005539482 systemd[1]: Started libpod-conmon-5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6.scope.
Nov 29 00:33:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.704580741 +0000 UTC m=+0.029826967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.819453938 +0000 UTC m=+0.144700234 container init 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.83125201 +0000 UTC m=+0.156498246 container start 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.835095453 +0000 UTC m=+0.160341709 container attach 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:33:23 np0005539482 suspicious_neumann[260394]: 167 167
Nov 29 00:33:23 np0005539482 systemd[1]: libpod-5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6.scope: Deactivated successfully.
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.840709948 +0000 UTC m=+0.165956224 container died 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:33:23 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d08265eb8fb6568ffea87887105f6733c0570a33388225b11aa0344fb8e1ac4a-merged.mount: Deactivated successfully.
Nov 29 00:33:23 np0005539482 podman[260377]: 2025-11-29 05:33:23.895621135 +0000 UTC m=+0.220867371 container remove 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:33:23 np0005539482 systemd[1]: libpod-conmon-5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6.scope: Deactivated successfully.
Nov 29 00:33:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:33:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:33:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:33:24 np0005539482 podman[260418]: 2025-11-29 05:33:24.130920572 +0000 UTC m=+0.070751128 container create e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 00:33:24 np0005539482 systemd[1]: Started libpod-conmon-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope.
Nov 29 00:33:24 np0005539482 podman[260418]: 2025-11-29 05:33:24.10290297 +0000 UTC m=+0.042733586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:33:24 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:33:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:24 np0005539482 podman[260418]: 2025-11-29 05:33:24.224445417 +0000 UTC m=+0.164276033 container init e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:33:24 np0005539482 podman[260418]: 2025-11-29 05:33:24.239915468 +0000 UTC m=+0.179746034 container start e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:33:24 np0005539482 podman[260418]: 2025-11-29 05:33:24.24373001 +0000 UTC m=+0.183560616 container attach e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:33:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:24 np0005539482 nova_compute[254898]: 2025-11-29 05:33:24.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:24 np0005539482 nova_compute[254898]: 2025-11-29 05:33:24.959 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 00:33:24 np0005539482 nova_compute[254898]: 2025-11-29 05:33:24.977 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 00:33:24 np0005539482 nova_compute[254898]: 2025-11-29 05:33:24.978 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:24 np0005539482 nova_compute[254898]: 2025-11-29 05:33:24.978 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 00:33:24 np0005539482 nova_compute[254898]: 2025-11-29 05:33:24.991 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:25 np0005539482 boring_ptolemy[260434]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:33:25 np0005539482 boring_ptolemy[260434]: --> relative data size: 1.0
Nov 29 00:33:25 np0005539482 boring_ptolemy[260434]: --> All data devices are unavailable
Nov 29 00:33:25 np0005539482 systemd[1]: libpod-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope: Deactivated successfully.
Nov 29 00:33:25 np0005539482 systemd[1]: libpod-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope: Consumed 1.177s CPU time.
Nov 29 00:33:25 np0005539482 podman[260418]: 2025-11-29 05:33:25.477610833 +0000 UTC m=+1.417441399 container died e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:33:25 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495-merged.mount: Deactivated successfully.
Nov 29 00:33:25 np0005539482 podman[260418]: 2025-11-29 05:33:25.5437287 +0000 UTC m=+1.483559226 container remove e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:33:25 np0005539482 systemd[1]: libpod-conmon-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope: Deactivated successfully.
Nov 29 00:33:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:26 np0005539482 nova_compute[254898]: 2025-11-29 05:33:26.002 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.303923134 +0000 UTC m=+0.064146940 container create a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:33:26 np0005539482 systemd[1]: Started libpod-conmon-a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11.scope.
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.278986336 +0000 UTC m=+0.039210222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:33:26 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.413491584 +0000 UTC m=+0.173715390 container init a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.42708734 +0000 UTC m=+0.187311186 container start a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.431552998 +0000 UTC m=+0.191776794 container attach a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:33:26 np0005539482 hungry_banzai[260633]: 167 167
Nov 29 00:33:26 np0005539482 systemd[1]: libpod-a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11.scope: Deactivated successfully.
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.434942859 +0000 UTC m=+0.195166705 container died a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:33:26 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3d14a72fa03860050ad7401c0eb532e12b22ca4c086bd4a04b63e87bc56635bd-merged.mount: Deactivated successfully.
Nov 29 00:33:26 np0005539482 podman[260617]: 2025-11-29 05:33:26.479028027 +0000 UTC m=+0.239251833 container remove a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:33:26 np0005539482 systemd[1]: libpod-conmon-a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11.scope: Deactivated successfully.
Nov 29 00:33:26 np0005539482 podman[260656]: 2025-11-29 05:33:26.726491985 +0000 UTC m=+0.076863945 container create 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:33:26 np0005539482 systemd[1]: Started libpod-conmon-09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d.scope.
Nov 29 00:33:26 np0005539482 podman[260656]: 2025-11-29 05:33:26.695982943 +0000 UTC m=+0.046354963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:33:26 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:33:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:26 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:26 np0005539482 podman[260656]: 2025-11-29 05:33:26.837863789 +0000 UTC m=+0.188235799 container init 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:33:26 np0005539482 podman[260656]: 2025-11-29 05:33:26.851007314 +0000 UTC m=+0.201379274 container start 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:33:26 np0005539482 podman[260656]: 2025-11-29 05:33:26.855780849 +0000 UTC m=+0.206152869 container attach 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]: {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:    "0": [
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:        {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "devices": [
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "/dev/loop3"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            ],
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_name": "ceph_lv0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_size": "21470642176",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "name": "ceph_lv0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "tags": {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cluster_name": "ceph",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.crush_device_class": "",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.encrypted": "0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osd_id": "0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.type": "block",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.vdo": "0"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            },
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "type": "block",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "vg_name": "ceph_vg0"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:        }
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:    ],
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:    "1": [
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:        {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "devices": [
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "/dev/loop4"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            ],
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_name": "ceph_lv1",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_size": "21470642176",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "name": "ceph_lv1",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "tags": {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cluster_name": "ceph",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.crush_device_class": "",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.encrypted": "0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osd_id": "1",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.type": "block",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.vdo": "0"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            },
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "type": "block",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "vg_name": "ceph_vg1"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:        }
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:    ],
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:    "2": [
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:        {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "devices": [
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "/dev/loop5"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            ],
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_name": "ceph_lv2",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_size": "21470642176",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "name": "ceph_lv2",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "tags": {
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.cluster_name": "ceph",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.crush_device_class": "",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.encrypted": "0",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osd_id": "2",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.type": "block",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:                "ceph.vdo": "0"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            },
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "type": "block",
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:            "vg_name": "ceph_vg2"
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:        }
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]:    ]
Nov 29 00:33:27 np0005539482 quirky_davinci[260675]: }
Nov 29 00:33:27 np0005539482 systemd[1]: libpod-09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d.scope: Deactivated successfully.
Nov 29 00:33:27 np0005539482 podman[260684]: 2025-11-29 05:33:27.735637195 +0000 UTC m=+0.023397493 container died 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:33:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay-94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2-merged.mount: Deactivated successfully.
Nov 29 00:33:27 np0005539482 podman[260684]: 2025-11-29 05:33:27.783811461 +0000 UTC m=+0.071571749 container remove 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:33:27 np0005539482 systemd[1]: libpod-conmon-09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d.scope: Deactivated successfully.
Nov 29 00:33:27 np0005539482 podman[260685]: 2025-11-29 05:33:27.829201221 +0000 UTC m=+0.084706255 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 00:33:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:27 np0005539482 nova_compute[254898]: 2025-11-29 05:33:27.951 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.521043065 +0000 UTC m=+0.052537913 container create f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:33:28 np0005539482 systemd[1]: Started libpod-conmon-f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776.scope.
Nov 29 00:33:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.585220375 +0000 UTC m=+0.116715223 container init f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.493414151 +0000 UTC m=+0.024909019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.591108076 +0000 UTC m=+0.122602894 container start f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.594061087 +0000 UTC m=+0.125555915 container attach f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:33:28 np0005539482 brave_meninsky[260874]: 167 167
Nov 29 00:33:28 np0005539482 systemd[1]: libpod-f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776.scope: Deactivated successfully.
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.595335878 +0000 UTC m=+0.126830716 container died f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:33:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bd3c6895d533787623bbd2b2d1c187b1c478110fbe02fbc2b55e6a4468fd5c34-merged.mount: Deactivated successfully.
Nov 29 00:33:28 np0005539482 podman[260858]: 2025-11-29 05:33:28.630547452 +0000 UTC m=+0.162042270 container remove f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:33:28 np0005539482 systemd[1]: libpod-conmon-f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776.scope: Deactivated successfully.
Nov 29 00:33:28 np0005539482 podman[260896]: 2025-11-29 05:33:28.773121474 +0000 UTC m=+0.036226660 container create 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:33:28 np0005539482 systemd[1]: Started libpod-conmon-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope.
Nov 29 00:33:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:33:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:28 np0005539482 podman[260896]: 2025-11-29 05:33:28.756402673 +0000 UTC m=+0.019507899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:33:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:33:28 np0005539482 podman[260896]: 2025-11-29 05:33:28.865668265 +0000 UTC m=+0.128773501 container init 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:33:28 np0005539482 podman[260896]: 2025-11-29 05:33:28.873488554 +0000 UTC m=+0.136593750 container start 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:33:28 np0005539482 podman[260896]: 2025-11-29 05:33:28.876633149 +0000 UTC m=+0.139738355 container attach 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:33:28 np0005539482 nova_compute[254898]: 2025-11-29 05:33:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:28 np0005539482 nova_compute[254898]: 2025-11-29 05:33:28.955 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:28 np0005539482 nova_compute[254898]: 2025-11-29 05:33:28.956 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]: {
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "osd_id": 0,
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "type": "bluestore"
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:    },
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "osd_id": 1,
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "type": "bluestore"
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:    },
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "osd_id": 2,
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:        "type": "bluestore"
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]:    }
Nov 29 00:33:29 np0005539482 thirsty_varahamihira[260912]: }
Nov 29 00:33:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:29 np0005539482 systemd[1]: libpod-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope: Deactivated successfully.
Nov 29 00:33:29 np0005539482 podman[260896]: 2025-11-29 05:33:29.909909277 +0000 UTC m=+1.173014463 container died 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:33:29 np0005539482 systemd[1]: libpod-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope: Consumed 1.042s CPU time.
Nov 29 00:33:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858-merged.mount: Deactivated successfully.
Nov 29 00:33:29 np0005539482 nova_compute[254898]: 2025-11-29 05:33:29.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:29 np0005539482 nova_compute[254898]: 2025-11-29 05:33:29.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:33:29 np0005539482 nova_compute[254898]: 2025-11-29 05:33:29.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:33:29 np0005539482 podman[260896]: 2025-11-29 05:33:29.97003261 +0000 UTC m=+1.233137796 container remove 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:33:29 np0005539482 systemd[1]: libpod-conmon-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope: Deactivated successfully.
Nov 29 00:33:29 np0005539482 nova_compute[254898]: 2025-11-29 05:33:29.984 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:33:29 np0005539482 nova_compute[254898]: 2025-11-29 05:33:29.985 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:29 np0005539482 nova_compute[254898]: 2025-11-29 05:33:29.985 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:33:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:33:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.016 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:33:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:33:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ac24e11f-9ee7-4ed3-b713-fc9b40f11d8b does not exist
Nov 29 00:33:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b518b32f-4a19-4b8d-8d9a-246606b06a74 does not exist
Nov 29 00:33:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:33:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654780220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.432 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.675 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.678 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.678 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.679 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.941 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:33:30 np0005539482 nova_compute[254898]: 2025-11-29 05:33:30.941 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:33:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:33:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.103 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.238 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.239 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.274 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.311 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.336 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:33:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:33:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032297455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.852 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.859 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.883 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.885 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:33:31 np0005539482 nova_compute[254898]: 2025-11-29 05:33:31.886 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:33:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:32 np0005539482 nova_compute[254898]: 2025-11-29 05:33:32.855 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:33:32 np0005539482 nova_compute[254898]: 2025-11-29 05:33:32.855 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:33:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:33:41
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.control']
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:33:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:45 np0005539482 podman[261058]: 2025-11-29 05:33:45.031185155 +0000 UTC m=+0.080264458 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 00:33:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:47 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:49 np0005539482 podman[261079]: 2025-11-29 05:33:49.099024971 +0000 UTC m=+0.139848897 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:33:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:49 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:33:51 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:53 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:54 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:55 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:33:58 np0005539482 podman[261107]: 2025-11-29 05:33:58.002170376 +0000 UTC m=+0.057551383 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 00:33:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:33:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.016178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441016249, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3471571, "memory_usage": 3515200, "flush_reason": "Manual Compaction"}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441048172, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3406772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16324, "largest_seqno": 18376, "table_properties": {"data_size": 3397419, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18498, "raw_average_key_size": 19, "raw_value_size": 3378759, "raw_average_value_size": 3625, "num_data_blocks": 268, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394208, "oldest_key_time": 1764394208, "file_creation_time": 1764394441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 32130 microseconds, and 14951 cpu microseconds.
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.048307) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3406772 bytes OK
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.048347) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.050403) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.050434) EVENT_LOG_v1 {"time_micros": 1764394441050423, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.050480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3462985, prev total WAL file size 3462985, number of live WAL files 2.
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.052101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3326KB)], [38(7512KB)]
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441052191, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11099450, "oldest_snapshot_seqno": -1}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4410 keys, 9346152 bytes, temperature: kUnknown
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441148620, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9346152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9312961, "index_size": 21049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106563, "raw_average_key_size": 24, "raw_value_size": 9229618, "raw_average_value_size": 2092, "num_data_blocks": 894, "num_entries": 4410, "num_filter_entries": 4410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.148830) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9346152 bytes
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.150532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.0 rd, 96.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4928, records dropped: 518 output_compression: NoCompression
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.150556) EVENT_LOG_v1 {"time_micros": 1764394441150541, "job": 18, "event": "compaction_finished", "compaction_time_micros": 96488, "compaction_time_cpu_micros": 41477, "output_level": 6, "num_output_files": 1, "total_output_size": 9346152, "num_input_records": 4928, "num_output_records": 4410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441151317, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441152688, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.051962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:01 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:34:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 00:34:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 00:34:02 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 00:34:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 00:34:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 00:34:03 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 00:34:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:34:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:34:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 00:34:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 00:34:05 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 00:34:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 25 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 5.1 MiB/s wr, 63 op/s
Nov 29 00:34:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 25 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 4.2 MiB/s wr, 52 op/s
Nov 29 00:34:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:34:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 00:34:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 00:34:09 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 00:34:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.9 MiB/s wr, 54 op/s
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:34:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 29 00:34:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:34:13.748 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:34:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:34:13.749 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:34:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:34:13.749 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:34:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.8 MiB/s wr, 7 op/s
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987643512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987643512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.919493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454919542, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 408, "num_deletes": 250, "total_data_size": 271339, "memory_usage": 279968, "flush_reason": "Manual Compaction"}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454925052, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 255149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18377, "largest_seqno": 18784, "table_properties": {"data_size": 252686, "index_size": 563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6229, "raw_average_key_size": 19, "raw_value_size": 247774, "raw_average_value_size": 781, "num_data_blocks": 25, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394442, "oldest_key_time": 1764394442, "file_creation_time": 1764394454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5611 microseconds, and 2713 cpu microseconds.
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.925105) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 255149 bytes OK
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.925128) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.926949) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.926972) EVENT_LOG_v1 {"time_micros": 1764394454926964, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.926993) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 268761, prev total WAL file size 268761, number of live WAL files 2.
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.927523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(249KB)], [41(9127KB)]
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454927575, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9601301, "oldest_snapshot_seqno": -1}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4217 keys, 6323138 bytes, temperature: kUnknown
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454980050, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6323138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6295642, "index_size": 15867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 103018, "raw_average_key_size": 24, "raw_value_size": 6219961, "raw_average_value_size": 1474, "num_data_blocks": 668, "num_entries": 4217, "num_filter_entries": 4217, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.980397) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6323138 bytes
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.981961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 120.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.9 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(62.4) write-amplify(24.8) OK, records in: 4727, records dropped: 510 output_compression: NoCompression
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.981993) EVENT_LOG_v1 {"time_micros": 1764394454981977, "job": 20, "event": "compaction_finished", "compaction_time_micros": 52583, "compaction_time_cpu_micros": 30098, "output_level": 6, "num_output_files": 1, "total_output_size": 6323138, "num_input_records": 4727, "num_output_records": 4217, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454982250, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454985502, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.927476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:14 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:34:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Nov 29 00:34:16 np0005539482 podman[261126]: 2025-11-29 05:34:16.016123878 +0000 UTC m=+0.069923749 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 00:34:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Nov 29 00:34:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:34:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 408 B/s rd, 102 B/s wr, 0 op/s
Nov 29 00:34:20 np0005539482 podman[261147]: 2025-11-29 05:34:20.037092909 +0000 UTC m=+0.090973624 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 00:34:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 00:34:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "format": "json"}]: dispatch
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 00:34:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:34:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:34:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:34:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:34:25 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.csskcz(active, since 25m)
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406/.meta.tmp'
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406/.meta.tmp' to config b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406/.meta'
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "format": "json"}]: dispatch
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 00:34:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:34:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:34:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s wr, 0 op/s
Nov 29 00:34:26 np0005539482 nova_compute[254898]: 2025-11-29 05:34:26.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:27 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:34:27.492 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:34:27 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:34:27.493 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640/.meta.tmp'
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640/.meta.tmp' to config b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640/.meta'
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "format": "json"}]: dispatch
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:34:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:34:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s wr, 0 op/s
Nov 29 00:34:28 np0005539482 nova_compute[254898]: 2025-11-29 05:34:28.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:28 np0005539482 nova_compute[254898]: 2025-11-29 05:34:28.983 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:28 np0005539482 nova_compute[254898]: 2025-11-29 05:34:28.984 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:29 np0005539482 podman[261187]: 2025-11-29 05:34:29.010818996 +0000 UTC m=+0.060895082 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 00:34:29 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:34:29.495 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:34:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 00:34:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.984 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.984 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.985 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.985 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:34:29 np0005539482 nova_compute[254898]: 2025-11-29 05:34:29.985 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/666548501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.450 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.609 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.610 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5179MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.610 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.610 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.696 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.697 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:34:30 np0005539482 nova_compute[254898]: 2025-11-29 05:34:30.723 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:34:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7916a9c3-dca1-42d8-aabf-32a9a58e0024 does not exist
Nov 29 00:34:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f85a386d-2c5f-429a-8bc3-2f942349a6c9 does not exist
Nov 29 00:34:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e9f35225-59a4-4afc-a4fb-03b78f89acbe does not exist
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:34:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:34:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 00:34:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:34:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:34:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:34:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:34:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257543193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:34:31 np0005539482 nova_compute[254898]: 2025-11-29 05:34:31.222 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:34:31 np0005539482 nova_compute[254898]: 2025-11-29 05:34:31.226 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:34:31 np0005539482 nova_compute[254898]: 2025-11-29 05:34:31.241 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:34:31 np0005539482 nova_compute[254898]: 2025-11-29 05:34:31.242 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:34:31 np0005539482 nova_compute[254898]: 2025-11-29 05:34:31.243 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.428576161 +0000 UTC m=+0.053012302 container create 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:34:31 np0005539482 systemd[1]: Started libpod-conmon-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope.
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.399039122 +0000 UTC m=+0.023475343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:34:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.520824165 +0000 UTC m=+0.145260346 container init 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.535315194 +0000 UTC m=+0.159751335 container start 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.539256667 +0000 UTC m=+0.163692828 container attach 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:34:31 np0005539482 eloquent_mirzakhani[261538]: 167 167
Nov 29 00:34:31 np0005539482 systemd[1]: libpod-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope: Deactivated successfully.
Nov 29 00:34:31 np0005539482 conmon[261538]: conmon 85b974d9c2fa292f670a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope/container/memory.events
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.542083625 +0000 UTC m=+0.166519766 container died 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "format": "json"}]: dispatch
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9e5093fadb1c1d305f81538b30b4c06cfd218029f847f491512af115e9280082-merged.mount: Deactivated successfully.
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.571+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86dc64fd-e983-41fb-88c2-0ca9782c4406' of type subvolume
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86dc64fd-e983-41fb-88c2-0ca9782c4406' of type subvolume
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "force": true, "format": "json"}]: dispatch
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406'' moved to trashcan
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 podman[261522]: 2025-11-29 05:34:31.592146477 +0000 UTC m=+0.216582608 container remove 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:34:31 np0005539482 systemd[1]: libpod-conmon-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope: Deactivated successfully.
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:34:31 np0005539482 podman[261585]: 2025-11-29 05:34:31.797992357 +0000 UTC m=+0.057847139 container create 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:34:31 np0005539482 systemd[1]: Started libpod-conmon-900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9.scope.
Nov 29 00:34:31 np0005539482 podman[261585]: 2025-11-29 05:34:31.774016772 +0000 UTC m=+0.033871544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:34:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:34:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:34:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:34:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:34:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:34:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "format": "json"}]: dispatch
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:139756d1-c4a7-4d9e-860e-88e58c898640, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:139756d1-c4a7-4d9e-860e-88e58c898640, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '139756d1-c4a7-4d9e-860e-88e58c898640' of type subvolume
Nov 29 00:34:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.907+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '139756d1-c4a7-4d9e-860e-88e58c898640' of type subvolume
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "force": true, "format": "json"}]: dispatch
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640'' moved to trashcan
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 00:34:31 np0005539482 podman[261585]: 2025-11-29 05:34:31.922024874 +0000 UTC m=+0.181879636 container init 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:34:31 np0005539482 podman[261585]: 2025-11-29 05:34:31.936135383 +0000 UTC m=+0.195990135 container start 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:34:31 np0005539482 podman[261585]: 2025-11-29 05:34:31.940876506 +0000 UTC m=+0.200731258 container attach 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:34:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 00:35:57 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 4 op/s
Nov 29 00:35:58 np0005539482 rsyslogd[1003]: imjournal: 1309 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "format": "json"}]: dispatch
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:35:58 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:58.829+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e332c7f-e0d3-46ad-9a13-7cf0840fc484' of type subvolume
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e332c7f-e0d3-46ad-9a13-7cf0840fc484' of type subvolume
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "force": true, "format": "json"}]: dispatch
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e332c7f-e0d3-46ad-9a13-7cf0840fc484'' moved to trashcan
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:35:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce/.meta.tmp'
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce/.meta.tmp' to config b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce/.meta'
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "format": "json"}]: dispatch
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:35:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389", "force": true, "format": "json"}]: dispatch
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp'
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp' to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta'
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692", "force": true, "format": "json"}]: dispatch
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp'
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp' to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta'
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 00:35:59 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Nov 29 00:36:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:01 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Nov 29 00:36:02 np0005539482 podman[263292]: 2025-11-29 05:36:02.056343049 +0000 UTC m=+0.092423506 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2/.meta.tmp'
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2/.meta.tmp' to config b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2/.meta'
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:02.910+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c8cd1abe-2662-4481-9c2f-01f70ea291ce' of type subvolume
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c8cd1abe-2662-4481-9c2f-01f70ea291ce' of type subvolume
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce'' moved to trashcan
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:02.959+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09990eae-c6d2-4985-ad1a-d7539b5b0a71' of type subvolume
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09990eae-c6d2-4985-ad1a-d7539b5b0a71' of type subvolume
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71'' moved to trashcan
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 00:36:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 00:36:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 00:36:03 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 00:36:03 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Nov 29 00:36:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:05 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 49 KiB/s wr, 5 op/s
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "format": "json"}]: dispatch
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb79334f-5107-432b-91aa-57c8d02f46a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb79334f-5107-432b-91aa-57c8d02f46a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:06.112+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb79334f-5107-432b-91aa-57c8d02f46a2' of type subvolume
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb79334f-5107-432b-91aa-57c8d02f46a2' of type subvolume
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2'' moved to trashcan
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 00:36:07 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 49 KiB/s wr, 5 op/s
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef/.meta.tmp'
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef/.meta.tmp' to config b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef/.meta'
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "format": "json"}]: dispatch
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 00:36:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:09 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 00:36:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 00:36:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 00:36:10 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:36:11 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 689 B/s rd, 50 KiB/s wr, 6 op/s
Nov 29 00:36:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:36:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:36:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:36:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:36:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:36:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:36:13 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 00:36:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:36:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236799450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:36:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:36:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236799450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "format": "json"}]: dispatch
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:14 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:14.840+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '969d69e3-5179-4284-9d56-4ddf6b5b95ef' of type subvolume
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '969d69e3-5179-4284-9d56-4ddf6b5b95ef' of type subvolume
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef'' moved to trashcan
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 00:36:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:15 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 00:36:17 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 00:36:19 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 2 op/s
Nov 29 00:36:20 np0005539482 podman[263315]: 2025-11-29 05:36:20.028491002 +0000 UTC m=+0.078114151 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 00:36:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943/.meta.tmp'
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943/.meta.tmp' to config b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943/.meta'
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "format": "json"}]: dispatch
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 00:36:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 00:36:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:21 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 174 B/s rd, 16 KiB/s wr, 2 op/s
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "format": "json"}]: dispatch
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:23.322+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f9bf0ea-9f71-4161-881f-1c5e81eea943' of type subvolume
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f9bf0ea-9f71-4161-881f-1c5e81eea943' of type subvolume
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943'' moved to trashcan
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455/.meta.tmp'
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455/.meta.tmp' to config b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455/.meta'
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "format": "json"}]: dispatch
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 00:36:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:23 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 16 KiB/s wr, 2 op/s
Nov 29 00:36:24 np0005539482 podman[263335]: 2025-11-29 05:36:24.061151203 +0000 UTC m=+0.108669189 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 00:36:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:25 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 3 op/s
Nov 29 00:36:26 np0005539482 nova_compute[254898]: 2025-11-29 05:36:26.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp'
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp' to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta'
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "format": "json"}]: dispatch
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:27 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 28 KiB/s wr, 2 op/s
Nov 29 00:36:28 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:36:28.261 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:36:28 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:36:28.262 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:36:28 np0005539482 nova_compute[254898]: 2025-11-29 05:36:28.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:28 np0005539482 nova_compute[254898]: 2025-11-29 05:36:28.965 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48601f02-9051-4603-a049-8748d3e87534", "format": "json"}]: dispatch
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48601f02-9051-4603-a049-8748d3e87534, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48601f02-9051-4603-a049-8748d3e87534, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48601f02-9051-4603-a049-8748d3e87534' of type subvolume
Nov 29 00:36:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:29.316+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48601f02-9051-4603-a049-8748d3e87534' of type subvolume
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/48601f02-9051-4603-a049-8748d3e87534'' moved to trashcan
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 00:36:29 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 43 KiB/s wr, 4 op/s
Nov 29 00:36:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:30 np0005539482 nova_compute[254898]: 2025-11-29 05:36:30.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:31 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:36:31.264 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:36:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc", "format": "json"}]: dispatch
Nov 29 00:36:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.988 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:36:31 np0005539482 nova_compute[254898]: 2025-11-29 05:36:31.988 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:36:31 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Nov 29 00:36:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:36:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558752570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:36:32 np0005539482 nova_compute[254898]: 2025-11-29 05:36:32.439 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:36:32 np0005539482 nova_compute[254898]: 2025-11-29 05:36:32.595 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:36:32 np0005539482 nova_compute[254898]: 2025-11-29 05:36:32.597 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:36:32 np0005539482 nova_compute[254898]: 2025-11-29 05:36:32.597 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:36:32 np0005539482 nova_compute[254898]: 2025-11-29 05:36:32.597 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.024 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.025 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 00:36:33 np0005539482 podman[263383]: 2025-11-29 05:36:33.039132753 +0000 UTC m=+0.081621845 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.052 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 00:36:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:36:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3442484796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.482 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.487 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.506 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.507 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 00:36:33 np0005539482 nova_compute[254898]: 2025-11-29 05:36:33.508 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:36:33 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Nov 29 00:36:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:35 np0005539482 nova_compute[254898]: 2025-11-29 05:36:35.503 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:36:35 np0005539482 nova_compute[254898]: 2025-11-29 05:36:35.504 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:36:35 np0005539482 nova_compute[254898]: 2025-11-29 05:36:35.505 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 00:36:35 np0005539482 nova_compute[254898]: 2025-11-29 05:36:35.505 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 00:36:35 np0005539482 nova_compute[254898]: 2025-11-29 05:36:35.527 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 00:36:35 np0005539482 nova_compute[254898]: 2025-11-29 05:36:35.527 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp'
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp' to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta'
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "format": "json"}]: dispatch
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:35 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 39 KiB/s wr, 5 op/s
Nov 29 00:36:37 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 00:36:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363", "format": "json"}]: dispatch
Nov 29 00:36:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:39 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 4 op/s
Nov 29 00:36:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:36:41
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups']
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f9c22880>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f9c22730>)]
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:36:41 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Nov 29 00:36:43 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.csskcz(active, since 28m)
Nov 29 00:36:43 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp'
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp' to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta'
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp'
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp' to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta'
Nov 29 00:36:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:36:45 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d753479f-197f-461a-8f0e-4387ea667c0c does not exist
Nov 29 00:36:45 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e00f2512-4242-410f-90fb-9cc597465dae does not exist
Nov 29 00:36:45 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a48fe57b-3d18-40c8-93fe-f8be926f6627 does not exist
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:36:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:36:45 np0005539482 podman[263700]: 2025-11-29 05:36:45.93910617 +0000 UTC m=+0.063311852 container create d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:36:45 np0005539482 systemd[1]: Started libpod-conmon-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope.
Nov 29 00:36:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 00:36:45 np0005539482 podman[263700]: 2025-11-29 05:36:45.906039099 +0000 UTC m=+0.030244761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:36:45 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 00:36:46 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:36:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0/.meta.tmp'
Nov 29 00:36:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0/.meta.tmp' to config b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0/.meta'
Nov 29 00:36:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 00:36:46 np0005539482 podman[263700]: 2025-11-29 05:36:46.016607074 +0000 UTC m=+0.140812656 container init d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:36:46 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "format": "json"}]: dispatch
Nov 29 00:36:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 00:36:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 00:36:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:46 np0005539482 podman[263700]: 2025-11-29 05:36:46.025520989 +0000 UTC m=+0.149726551 container start d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:36:46 np0005539482 podman[263700]: 2025-11-29 05:36:46.028514372 +0000 UTC m=+0.152719944 container attach d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:36:46 np0005539482 eloquent_carson[263716]: 167 167
Nov 29 00:36:46 np0005539482 systemd[1]: libpod-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope: Deactivated successfully.
Nov 29 00:36:46 np0005539482 conmon[263716]: conmon d590007deb3a7207aa4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope/container/memory.events
Nov 29 00:36:46 np0005539482 podman[263721]: 2025-11-29 05:36:46.088337028 +0000 UTC m=+0.035354915 container died d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:36:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:36:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:36:46 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:36:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c4c2d3a5cc1357250fc8f19426ea2fffdd20e784d5e4572e79cdf449734d4404-merged.mount: Deactivated successfully.
Nov 29 00:36:46 np0005539482 podman[263721]: 2025-11-29 05:36:46.123703775 +0000 UTC m=+0.070721662 container remove d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:36:46 np0005539482 systemd[1]: libpod-conmon-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope: Deactivated successfully.
Nov 29 00:36:46 np0005539482 podman[263743]: 2025-11-29 05:36:46.295488309 +0000 UTC m=+0.039462815 container create a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:36:46 np0005539482 systemd[1]: Started libpod-conmon-a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee.scope.
Nov 29 00:36:46 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:36:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:46 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:46 np0005539482 podman[263743]: 2025-11-29 05:36:46.372501802 +0000 UTC m=+0.116476308 container init a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:36:46 np0005539482 podman[263743]: 2025-11-29 05:36:46.279570285 +0000 UTC m=+0.023544771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:36:46 np0005539482 podman[263743]: 2025-11-29 05:36:46.382285419 +0000 UTC m=+0.126259885 container start a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:36:46 np0005539482 podman[263743]: 2025-11-29 05:36:46.385548318 +0000 UTC m=+0.129522794 container attach a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:36:47 np0005539482 happy_bhaskara[263760]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:36:47 np0005539482 happy_bhaskara[263760]: --> relative data size: 1.0
Nov 29 00:36:47 np0005539482 happy_bhaskara[263760]: --> All data devices are unavailable
Nov 29 00:36:47 np0005539482 systemd[1]: libpod-a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee.scope: Deactivated successfully.
Nov 29 00:36:47 np0005539482 podman[263789]: 2025-11-29 05:36:47.459896554 +0000 UTC m=+0.028463029 container died a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp'
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp' to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta'
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7-merged.mount: Deactivated successfully.
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp'
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp' to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta'
Nov 29 00:36:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:47 np0005539482 podman[263789]: 2025-11-29 05:36:47.530965104 +0000 UTC m=+0.099531509 container remove a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:36:47 np0005539482 systemd[1]: libpod-conmon-a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee.scope: Deactivated successfully.
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 2 op/s
Nov 29 00:36:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 00:36:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 00:36:48 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.27954449 +0000 UTC m=+0.068492098 container create 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "740417e2-7402-40fb-a24e-d743db894fa4", "format": "json"}]: dispatch
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:740417e2-7402-40fb-a24e-d743db894fa4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:48 np0005539482 systemd[1]: Started libpod-conmon-3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52.scope.
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:740417e2-7402-40fb-a24e-d743db894fa4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:48 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:48.335+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '740417e2-7402-40fb-a24e-d743db894fa4' of type subvolume
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '740417e2-7402-40fb-a24e-d743db894fa4' of type subvolume
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.254240968 +0000 UTC m=+0.043188656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4'' moved to trashcan
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 00:36:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.381370703 +0000 UTC m=+0.170318401 container init 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.391735053 +0000 UTC m=+0.180682691 container start 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.39612536 +0000 UTC m=+0.185072998 container attach 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:36:48 np0005539482 serene_aryabhata[263961]: 167 167
Nov 29 00:36:48 np0005539482 systemd[1]: libpod-3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52.scope: Deactivated successfully.
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.399648015 +0000 UTC m=+0.188595693 container died 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:36:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-71d5b83171d2bf1e37208ebd02c221c70c0ab24f376c4be24b6df6229dd2491c-merged.mount: Deactivated successfully.
Nov 29 00:36:48 np0005539482 podman[263945]: 2025-11-29 05:36:48.443960317 +0000 UTC m=+0.232907905 container remove 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:36:48 np0005539482 systemd[1]: libpod-conmon-3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52.scope: Deactivated successfully.
Nov 29 00:36:48 np0005539482 podman[263985]: 2025-11-29 05:36:48.653541946 +0000 UTC m=+0.059562791 container create 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 00:36:48 np0005539482 systemd[1]: Started libpod-conmon-85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016.scope.
Nov 29 00:36:48 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:36:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:48 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:48 np0005539482 podman[263985]: 2025-11-29 05:36:48.630241273 +0000 UTC m=+0.036262188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:36:48 np0005539482 podman[263985]: 2025-11-29 05:36:48.727856063 +0000 UTC m=+0.133876928 container init 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:36:48 np0005539482 podman[263985]: 2025-11-29 05:36:48.735297504 +0000 UTC m=+0.141318329 container start 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:36:48 np0005539482 podman[263985]: 2025-11-29 05:36:48.738205304 +0000 UTC m=+0.144226169 container attach 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 00:36:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 00:36:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 00:36:49 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 00:36:49 np0005539482 determined_hugle[264002]: {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:    "0": [
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:        {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "devices": [
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "/dev/loop3"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            ],
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_name": "ceph_lv0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_size": "21470642176",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "name": "ceph_lv0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "tags": {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cluster_name": "ceph",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.crush_device_class": "",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.encrypted": "0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osd_id": "0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.type": "block",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.vdo": "0"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            },
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "type": "block",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "vg_name": "ceph_vg0"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:        }
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:    ],
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:    "1": [
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:        {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "devices": [
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "/dev/loop4"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            ],
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_name": "ceph_lv1",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_size": "21470642176",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "name": "ceph_lv1",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "tags": {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cluster_name": "ceph",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.crush_device_class": "",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.encrypted": "0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osd_id": "1",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.type": "block",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.vdo": "0"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            },
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "type": "block",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "vg_name": "ceph_vg1"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:        }
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:    ],
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:    "2": [
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:        {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "devices": [
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "/dev/loop5"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            ],
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_name": "ceph_lv2",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_size": "21470642176",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "name": "ceph_lv2",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "tags": {
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.cluster_name": "ceph",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.crush_device_class": "",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.encrypted": "0",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osd_id": "2",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.type": "block",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:                "ceph.vdo": "0"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            },
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "type": "block",
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:            "vg_name": "ceph_vg2"
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:        }
Nov 29 00:36:49 np0005539482 determined_hugle[264002]:    ]
Nov 29 00:36:49 np0005539482 determined_hugle[264002]: }
Nov 29 00:36:49 np0005539482 systemd[1]: libpod-85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016.scope: Deactivated successfully.
Nov 29 00:36:49 np0005539482 podman[263985]: 2025-11-29 05:36:49.471699836 +0000 UTC m=+0.877720701 container died 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:36:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay-400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4-merged.mount: Deactivated successfully.
Nov 29 00:36:49 np0005539482 podman[263985]: 2025-11-29 05:36:49.527540917 +0000 UTC m=+0.933561732 container remove 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:36:49 np0005539482 systemd[1]: libpod-conmon-85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016.scope: Deactivated successfully.
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 5 op/s
Nov 29 00:36:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.284414444 +0000 UTC m=+0.044333443 container create a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:36:50 np0005539482 systemd[1]: Started libpod-conmon-a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7.scope.
Nov 29 00:36:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.352477301 +0000 UTC m=+0.112396290 container init a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.259249805 +0000 UTC m=+0.019168794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.365621508 +0000 UTC m=+0.125540527 container start a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 00:36:50 np0005539482 keen_williamson[264180]: 167 167
Nov 29 00:36:50 np0005539482 systemd[1]: libpod-a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7.scope: Deactivated successfully.
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.37061978 +0000 UTC m=+0.130538799 container attach a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.370968698 +0000 UTC m=+0.130887677 container died a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:36:50 np0005539482 systemd[1]: var-lib-containers-storage-overlay-aaebebc1b7f41e3b248145c45b13ac5ec1be0522060618289bd777af84bacefd-merged.mount: Deactivated successfully.
Nov 29 00:36:50 np0005539482 podman[264162]: 2025-11-29 05:36:50.419912122 +0000 UTC m=+0.179831101 container remove a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:36:50 np0005539482 systemd[1]: libpod-conmon-a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7.scope: Deactivated successfully.
Nov 29 00:36:50 np0005539482 podman[264177]: 2025-11-29 05:36:50.434990657 +0000 UTC m=+0.098260098 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 00:36:50 np0005539482 podman[264224]: 2025-11-29 05:36:50.582048234 +0000 UTC m=+0.045467992 container create 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:36:50 np0005539482 systemd[1]: Started libpod-conmon-718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879.scope.
Nov 29 00:36:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:36:50 np0005539482 podman[264224]: 2025-11-29 05:36:50.560178034 +0000 UTC m=+0.023597872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:36:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:36:50 np0005539482 podman[264224]: 2025-11-29 05:36:50.664914078 +0000 UTC m=+0.128333846 container init 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:36:50 np0005539482 podman[264224]: 2025-11-29 05:36:50.674959271 +0000 UTC m=+0.138379029 container start 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:36:50 np0005539482 podman[264224]: 2025-11-29 05:36:50.677481572 +0000 UTC m=+0.140901330 container attach 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "format": "json"}]: dispatch
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:50 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:50.822+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '63e32269-5cd1-4b91-be8c-8e96abc0fca0' of type subvolume
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '63e32269-5cd1-4b91-be8c-8e96abc0fca0' of type subvolume
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0'' moved to trashcan
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "format": "json"}]: dispatch
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:50 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:50.996+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff968508-e63c-4125-8d0a-ffeca3c4312c' of type subvolume
Nov 29 00:36:50 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff968508-e63c-4125-8d0a-ffeca3c4312c' of type subvolume
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c'' moved to trashcan
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.513314368040633e-05 of space, bias 4.0, pg target 0.06615977241648759 quantized to 16 (current 16)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]: {
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "osd_id": 0,
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "type": "bluestore"
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:    },
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "osd_id": 1,
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "type": "bluestore"
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:    },
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "osd_id": 2,
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:        "type": "bluestore"
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]:    }
Nov 29 00:36:51 np0005539482 nostalgic_burnell[264241]: }
Nov 29 00:36:51 np0005539482 systemd[1]: libpod-718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879.scope: Deactivated successfully.
Nov 29 00:36:51 np0005539482 podman[264224]: 2025-11-29 05:36:51.609101266 +0000 UTC m=+1.072521024 container died 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:36:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay-69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af-merged.mount: Deactivated successfully.
Nov 29 00:36:51 np0005539482 podman[264224]: 2025-11-29 05:36:51.661276138 +0000 UTC m=+1.124695896 container remove 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:36:51 np0005539482 systemd[1]: libpod-conmon-718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879.scope: Deactivated successfully.
Nov 29 00:36:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:36:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:36:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:36:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b9dc8a88-f02c-447c-9953-2027921c39f8 does not exist
Nov 29 00:36:51 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev dacc5641-1f2e-4325-b99a-03731387d314 does not exist
Nov 29 00:36:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 00:36:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:36:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:36:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 41 KiB/s wr, 5 op/s
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "format": "json"}]: dispatch
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:36:55 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:55.145+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f1f9a5-b960-4859-afd1-e8403dcbe455' of type subvolume
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f1f9a5-b960-4859-afd1-e8403dcbe455' of type subvolume
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "force": true, "format": "json"}]: dispatch
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455'' moved to trashcan
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:36:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 00:36:55 np0005539482 podman[264337]: 2025-11-29 05:36:55.21648869 +0000 UTC m=+0.248468320 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 00:36:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:36:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 29 00:36:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 00:36:55 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 29 00:36:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 74 KiB/s wr, 8 op/s
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 923 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c/.meta.tmp'
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c/.meta.tmp' to config b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c/.meta'
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "format": "json"}]: dispatch
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 00:36:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 00:36:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:36:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.185166) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619185203, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1626, "num_deletes": 255, "total_data_size": 2400312, "memory_usage": 2452144, "flush_reason": "Manual Compaction"}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619199418, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2364297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19517, "largest_seqno": 21142, "table_properties": {"data_size": 2356443, "index_size": 4604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17820, "raw_average_key_size": 21, "raw_value_size": 2340197, "raw_average_value_size": 2762, "num_data_blocks": 205, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394500, "oldest_key_time": 1764394500, "file_creation_time": 1764394619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14318 microseconds, and 7215 cpu microseconds.
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.199482) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2364297 bytes OK
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.199508) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.201618) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.201648) EVENT_LOG_v1 {"time_micros": 1764394619201638, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.201675) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2392911, prev total WAL file size 2392911, number of live WAL files 2.
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.202946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2308KB)], [47(6964KB)]
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619203075, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9496071, "oldest_snapshot_seqno": -1}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4483 keys, 7742953 bytes, temperature: kUnknown
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619259053, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7742953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711812, "index_size": 18807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 111181, "raw_average_key_size": 24, "raw_value_size": 7629693, "raw_average_value_size": 1701, "num_data_blocks": 784, "num_entries": 4483, "num_filter_entries": 4483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.259415) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7742953 bytes
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.261841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 138.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 5008, records dropped: 525 output_compression: NoCompression
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.261869) EVENT_LOG_v1 {"time_micros": 1764394619261856, "job": 24, "event": "compaction_finished", "compaction_time_micros": 56064, "compaction_time_cpu_micros": 28828, "output_level": 6, "num_output_files": 1, "total_output_size": 7742953, "num_input_records": 5008, "num_output_records": 4483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619262802, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619264974, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.202781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:36:59 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:37:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 5 op/s
Nov 29 00:37:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 4 op/s
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541/.meta.tmp'
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541/.meta.tmp' to config b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541/.meta'
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "format": "json"}]: dispatch
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 00:37:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 00:37:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:37:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:37:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 4 op/s
Nov 29 00:37:04 np0005539482 podman[264365]: 2025-11-29 05:37:04.058909733 +0000 UTC m=+0.098011311 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 00:37:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 28 KiB/s wr, 3 op/s
Nov 29 00:37:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 25 KiB/s wr, 2 op/s
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "25c968fa-209f-495f-aace-23679fada541", "format": "json"}]: dispatch
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:25c968fa-209f-495f-aace-23679fada541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:25c968fa-209f-495f-aace-23679fada541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:09 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:09.099+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25c968fa-209f-495f-aace-23679fada541' of type subvolume
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25c968fa-209f-495f-aace-23679fada541' of type subvolume
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541'' moved to trashcan
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:37:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 00:37:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Nov 29 00:37:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:37:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:37:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:37:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:37:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:37:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:37:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 2 op/s
Nov 29 00:37:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:37:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:37:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:37:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:37:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:37:13.752 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:37:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 1 op/s
Nov 29 00:37:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:37:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226965972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:37:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:37:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226965972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:37:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 3 op/s
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "format": "json"}]: dispatch
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:37:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "format": "json"}]: dispatch
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:18 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:18.528+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c' of type subvolume
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c' of type subvolume
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c'' moved to trashcan
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:37:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/.meta.tmp'
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/.meta.tmp' to config b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/.meta'
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "format": "json"}]: dispatch
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:37:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:37:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 3 op/s
Nov 29 00:37:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:21 np0005539482 podman[264384]: 2025-11-29 05:37:21.0125542 +0000 UTC m=+0.064415339 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 00:37:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d", "format": "json"}]: dispatch
Nov 29 00:37:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 00:37:22 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6", "format": "json"}]: dispatch
Nov 29 00:37:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:37:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:37:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:37:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 00:37:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 41 KiB/s wr, 5 op/s
Nov 29 00:37:26 np0005539482 podman[264406]: 2025-11-29 05:37:26.051914264 +0000 UTC m=+0.093310218 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:37:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:37:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:37:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:37:26 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 00:37:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:37:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:37:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 4 op/s
Nov 29 00:37:28 np0005539482 nova_compute[254898]: 2025-11-29 05:37:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 55 KiB/s wr, 6 op/s
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:30 np0005539482 nova_compute[254898]: 2025-11-29 05:37:30.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:30 np0005539482 nova_compute[254898]: 2025-11-29 05:37:30.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:31 np0005539482 nova_compute[254898]: 2025-11-29 05:37:31.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Nov 29 00:37:32 np0005539482 nova_compute[254898]: 2025-11-29 05:37:32.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:32 np0005539482 nova_compute[254898]: 2025-11-29 05:37:32.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:37:33 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 00:37:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 29 00:37:33 np0005539482 nova_compute[254898]: 2025-11-29 05:37:33.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a9634de1-2230-40f8-a094-82f46777a70c", "format": "json"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a9634de1-2230-40f8-a094-82f46777a70c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 nova_compute[254898]: 2025-11-29 05:37:33.980 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:37:33 np0005539482 nova_compute[254898]: 2025-11-29 05:37:33.981 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a9634de1-2230-40f8-a094-82f46777a70c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:33.980+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a9634de1-2230-40f8-a094-82f46777a70c' of type subvolume
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a9634de1-2230-40f8-a094-82f46777a70c' of type subvolume
Nov 29 00:37:33 np0005539482 nova_compute[254898]: 2025-11-29 05:37:33.981 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:37:33 np0005539482 nova_compute[254898]: 2025-11-29 05:37:33.981 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:37:33 np0005539482 nova_compute[254898]: 2025-11-29 05:37:33.982 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c'' moved to trashcan
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:37:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 00:37:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 47 KiB/s wr, 6 op/s
Nov 29 00:37:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:37:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236421173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.421 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.616 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.617 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.618 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.618 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.677 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.677 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:37:34 np0005539482 nova_compute[254898]: 2025-11-29 05:37:34.690 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:37:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 29 00:37:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 00:37:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 29 00:37:34 np0005539482 podman[264476]: 2025-11-29 05:37:34.99123789 +0000 UTC m=+0.045842219 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 00:37:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:37:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639970689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:37:35 np0005539482 nova_compute[254898]: 2025-11-29 05:37:35.135 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:37:35 np0005539482 nova_compute[254898]: 2025-11-29 05:37:35.140 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:37:35 np0005539482 nova_compute[254898]: 2025-11-29 05:37:35.156 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:37:35 np0005539482 nova_compute[254898]: 2025-11-29 05:37:35.157 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:37:35 np0005539482 nova_compute[254898]: 2025-11-29 05:37:35.158 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:37:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891/.meta.tmp'
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891/.meta.tmp' to config b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891/.meta'
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "format": "json"}]: dispatch
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 00:37:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 00:37:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:37:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:37:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 96 KiB/s wr, 12 op/s
Nov 29 00:37:37 np0005539482 nova_compute[254898]: 2025-11-29 05:37:37.154 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:37 np0005539482 nova_compute[254898]: 2025-11-29 05:37:37.155 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:37 np0005539482 nova_compute[254898]: 2025-11-29 05:37:37.155 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:37:37 np0005539482 nova_compute[254898]: 2025-11-29 05:37:37.155 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:37:37 np0005539482 nova_compute[254898]: 2025-11-29 05:37:37.182 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:37:37 np0005539482 nova_compute[254898]: 2025-11-29 05:37:37.182 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:37:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:37:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:37 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Nov 29 00:37:38 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:37:38.337 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:37:38 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:37:38.338 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 99 KiB/s wr, 11 op/s
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "040becac-51dc-4867-bf68-cd9d237d5891", "format": "json"}]: dispatch
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:040becac-51dc-4867-bf68-cd9d237d5891, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:040becac-51dc-4867-bf68-cd9d237d5891, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:40.317+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '040becac-51dc-4867-bf68-cd9d237d5891' of type subvolume
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '040becac-51dc-4867-bf68-cd9d237d5891' of type subvolume
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891'' moved to trashcan
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:37:40.340 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:37:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:37:40 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:37:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:37:41 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:37:41
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr']
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:37:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:37:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 99 KiB/s wr, 12 op/s
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7/.meta.tmp'
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7/.meta.tmp' to config b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7/.meta'
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "format": "json"}]: dispatch
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 00:37:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 00:37:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:37:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:37:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 223 B/s rd, 29 KiB/s wr, 4 op/s
Nov 29 00:37:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:37:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:37:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:44 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:37:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:37:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:37:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:37:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:37:48 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:37:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "format": "json"}]: dispatch
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:201b4694-8935-45ce-9803-6d0546c82ba7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:201b4694-8935-45ce-9803-6d0546c82ba7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:37:49 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:49.053+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '201b4694-8935-45ce-9803-6d0546c82ba7' of type subvolume
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '201b4694-8935-45ce-9803-6d0546c82ba7' of type subvolume
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "force": true, "format": "json"}]: dispatch
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7'' moved to trashcan
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:37:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 00:37:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:37:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:37:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:37:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 00:37:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 29 00:37:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 00:37:50 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.990100198606499e-05 of space, bias 4.0, pg target 0.11988120238327798 quantized to 16 (current 16)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:37:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:37:52 np0005539482 podman[264524]: 2025-11-29 05:37:52.004464351 +0000 UTC m=+0.052414630 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 97667023-6f45-45d8-b348-6d48ceed01fb does not exist
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 9eab6a90-e4d1-4ca1-88dc-1e0213008c48 does not exist
Nov 29 00:37:52 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev c15888b5-6fda-416f-959f-36d48f2334fe does not exist
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:37:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:37:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:37:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:37:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
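The audit entries above embed the issued command as a JSON array after `cmd=` (single-quoted in the `finished` variant, unquoted in the `dispatch` variant). Assuming that format holds, a minimal extraction sketch — `parse_audit` is a hypothetical helper for illustration, not a Ceph tool:

```python
import json
import re

# Matches both audit forms seen in the log:
#   cmd=[{...}]: dispatch
#   cmd='[{...}]': finished
AUDIT_RE = re.compile(r"cmd='?(\[.*\])'?: (dispatch|finished)")

def parse_audit(line: str):
    """Return (command list, phase) from a ceph-mon audit line, or None."""
    m = AUDIT_RE.search(line)
    if m is None:
        return None
    return json.loads(m.group(1)), m.group(2)
```

Running it over the lines above would, for example, recover `{"prefix": "auth get", "entity": "client.admin"}` with phase `dispatch`.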
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.362670043 +0000 UTC m=+0.040614163 container create 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:37:53 np0005539482 systemd[1]: Started libpod-conmon-8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd.scope.
Nov 29 00:37:53 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.431193431 +0000 UTC m=+0.109137611 container init 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.437073843 +0000 UTC m=+0.115017963 container start 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.344429092 +0000 UTC m=+0.022373232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.440296051 +0000 UTC m=+0.118240171 container attach 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:37:53 np0005539482 crazy_heisenberg[264807]: 167 167
Nov 29 00:37:53 np0005539482 systemd[1]: libpod-8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd.scope: Deactivated successfully.
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.441835428 +0000 UTC m=+0.119779558 container died 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:37:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e7e64b5d27bf0a6a0da54a347543d08bdc08ed6aad5a20396fe7fd15274f7482-merged.mount: Deactivated successfully.
Nov 29 00:37:53 np0005539482 podman[264791]: 2025-11-29 05:37:53.477431989 +0000 UTC m=+0.155376109 container remove 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:37:53 np0005539482 systemd[1]: libpod-conmon-8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd.scope: Deactivated successfully.
Nov 29 00:37:53 np0005539482 podman[264832]: 2025-11-29 05:37:53.630659915 +0000 UTC m=+0.046519236 container create 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:37:53 np0005539482 systemd[1]: Started libpod-conmon-627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a.scope.
Nov 29 00:37:53 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:37:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:53 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:53 np0005539482 podman[264832]: 2025-11-29 05:37:53.606433659 +0000 UTC m=+0.022293020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:37:53 np0005539482 podman[264832]: 2025-11-29 05:37:53.707654627 +0000 UTC m=+0.123513928 container init 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:37:53 np0005539482 podman[264832]: 2025-11-29 05:37:53.714251157 +0000 UTC m=+0.130110448 container start 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:37:53 np0005539482 podman[264832]: 2025-11-29 05:37:53.717615708 +0000 UTC m=+0.133475029 container attach 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:37:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 00:37:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:37:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4811 writes, 21K keys, 4811 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4811 writes, 4811 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1490 writes, 6819 keys, 1490 commit groups, 1.0 writes per commit group, ingest: 9.58 MB, 0.02 MB/s#012Interval WAL: 1490 writes, 1490 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.8      0.23              0.10        12    0.019       0      0       0.0       0.0#012  L6      1/0    7.38 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    149.4    122.4      0.63              0.30        11    0.058     48K   5786       0.0       0.0#012 Sum      1/0    7.38 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    109.2    117.7      0.87              0.40        23    0.038     48K   5786       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1    118.1    119.5      0.38              0.18        10    0.038     23K   2592       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    149.4    122.4      0.63              0.30        11    0.058     48K   5786       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.6      0.23              0.10        11    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.9 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 304.00 MB usage: 8.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(557,8.30 MB,2.73181%) FilterBlock(24,141.61 KB,0.0454903%) IndexBlock(24,266.12 KB,0.0854894%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
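The RocksDB stats dump above is one multi-line message: syslog/journald output renders each embedded newline (octal 012, i.e. `\n`) as the literal token `#012`. A minimal sketch to restore the readable multi-line form — an illustrative helper, not part of Ceph or systemd:

```python
def unescape_journal(message: str) -> str:
    """Replace the "#012" newline escapes in a flattened log message
    with real newlines, recovering the original multi-line layout."""
    return message.replace("#012", "\n")
```

Feeding the dump through this helper yields the stats tables laid out line by line as RocksDB wrote them.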
Nov 29 00:37:54 np0005539482 infallible_torvalds[264848]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:37:54 np0005539482 infallible_torvalds[264848]: --> relative data size: 1.0
Nov 29 00:37:54 np0005539482 infallible_torvalds[264848]: --> All data devices are unavailable
Nov 29 00:37:54 np0005539482 systemd[1]: libpod-627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a.scope: Deactivated successfully.
Nov 29 00:37:54 np0005539482 podman[264832]: 2025-11-29 05:37:54.669935893 +0000 UTC m=+1.085795194 container died 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:37:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225-merged.mount: Deactivated successfully.
Nov 29 00:37:55 np0005539482 podman[264832]: 2025-11-29 05:37:55.194813129 +0000 UTC m=+1.610672430 container remove 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:37:55 np0005539482 systemd[1]: libpod-conmon-627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a.scope: Deactivated successfully.
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/.meta.tmp'
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/.meta.tmp' to config b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/.meta'
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "format": "json"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:37:55 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:37:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:37:55 np0005539482 podman[265029]: 2025-11-29 05:37:55.881389346 +0000 UTC m=+0.044830286 container create 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:37:55 np0005539482 systemd[1]: Started libpod-conmon-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope.
Nov 29 00:37:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:37:55 np0005539482 podman[265029]: 2025-11-29 05:37:55.859102487 +0000 UTC m=+0.022543437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:37:55 np0005539482 podman[265029]: 2025-11-29 05:37:55.958198024 +0000 UTC m=+0.121638954 container init 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:37:55 np0005539482 podman[265029]: 2025-11-29 05:37:55.969490387 +0000 UTC m=+0.132931307 container start 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:37:55 np0005539482 podman[265029]: 2025-11-29 05:37:55.972868639 +0000 UTC m=+0.136309559 container attach 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:37:55 np0005539482 confident_proskuriakova[265045]: 167 167
Nov 29 00:37:55 np0005539482 systemd[1]: libpod-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope: Deactivated successfully.
Nov 29 00:37:55 np0005539482 conmon[265045]: conmon 5d28d98700483e3e4a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope/container/memory.events
Nov 29 00:37:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 00:37:56 np0005539482 podman[265050]: 2025-11-29 05:37:56.041720624 +0000 UTC m=+0.040926410 container died 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:37:56 np0005539482 podman[265050]: 2025-11-29 05:37:56.081022815 +0000 UTC m=+0.080228581 container remove 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:37:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-04f820e928d1b7ba203adc1d50e057a912f848c5f71f2794310e2e7a55c1884d-merged.mount: Deactivated successfully.
Nov 29 00:37:56 np0005539482 systemd[1]: libpod-conmon-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope: Deactivated successfully.
Nov 29 00:37:56 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:37:56 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:37:56 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:37:56 np0005539482 podman[265065]: 2025-11-29 05:37:56.214179035 +0000 UTC m=+0.106848425 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:37:56 np0005539482 podman[265098]: 2025-11-29 05:37:56.273851789 +0000 UTC m=+0.055206817 container create 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 00:37:56 np0005539482 systemd[1]: Started libpod-conmon-09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0.scope.
Nov 29 00:37:56 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:37:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:56 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:56 np0005539482 podman[265098]: 2025-11-29 05:37:56.257868852 +0000 UTC m=+0.039223860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:37:56 np0005539482 podman[265098]: 2025-11-29 05:37:56.353370263 +0000 UTC m=+0.134725341 container init 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:37:56 np0005539482 podman[265098]: 2025-11-29 05:37:56.362524954 +0000 UTC m=+0.143879942 container start 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:37:56 np0005539482 podman[265098]: 2025-11-29 05:37:56.36567552 +0000 UTC m=+0.147030538 container attach 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]: {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:    "0": [
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:        {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "devices": [
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "/dev/loop3"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            ],
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_name": "ceph_lv0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_size": "21470642176",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "name": "ceph_lv0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "tags": {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cluster_name": "ceph",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.crush_device_class": "",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.encrypted": "0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osd_id": "0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.type": "block",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.vdo": "0"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            },
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "type": "block",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "vg_name": "ceph_vg0"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:        }
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:    ],
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:    "1": [
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:        {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "devices": [
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "/dev/loop4"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            ],
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_name": "ceph_lv1",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_size": "21470642176",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "name": "ceph_lv1",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "tags": {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cluster_name": "ceph",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.crush_device_class": "",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.encrypted": "0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osd_id": "1",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.type": "block",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.vdo": "0"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            },
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "type": "block",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "vg_name": "ceph_vg1"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:        }
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:    ],
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:    "2": [
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:        {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "devices": [
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "/dev/loop5"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            ],
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_name": "ceph_lv2",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_size": "21470642176",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "name": "ceph_lv2",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "tags": {
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.cluster_name": "ceph",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.crush_device_class": "",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.encrypted": "0",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osd_id": "2",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.type": "block",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:                "ceph.vdo": "0"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            },
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "type": "block",
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:            "vg_name": "ceph_vg2"
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:        }
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]:    ]
Nov 29 00:37:57 np0005539482 agitated_haibt[265115]: }
Nov 29 00:37:57 np0005539482 systemd[1]: libpod-09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0.scope: Deactivated successfully.
Nov 29 00:37:57 np0005539482 podman[265124]: 2025-11-29 05:37:57.172788593 +0000 UTC m=+0.037013067 container died 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:37:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9-merged.mount: Deactivated successfully.
Nov 29 00:37:57 np0005539482 podman[265124]: 2025-11-29 05:37:57.244083977 +0000 UTC m=+0.108308361 container remove 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:37:57 np0005539482 systemd[1]: libpod-conmon-09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0.scope: Deactivated successfully.
Nov 29 00:37:57 np0005539482 podman[265279]: 2025-11-29 05:37:57.884552209 +0000 UTC m=+0.044087947 container create 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 00:37:57 np0005539482 systemd[1]: Started libpod-conmon-7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58.scope.
Nov 29 00:37:57 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:37:57 np0005539482 podman[265279]: 2025-11-29 05:37:57.862814784 +0000 UTC m=+0.022350612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:37:57 np0005539482 podman[265279]: 2025-11-29 05:37:57.964690537 +0000 UTC m=+0.124226295 container init 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:37:57 np0005539482 podman[265279]: 2025-11-29 05:37:57.97101385 +0000 UTC m=+0.130549588 container start 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:37:57 np0005539482 podman[265279]: 2025-11-29 05:37:57.973737706 +0000 UTC m=+0.133273444 container attach 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:37:57 np0005539482 cool_mirzakhani[265296]: 167 167
Nov 29 00:37:57 np0005539482 systemd[1]: libpod-7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58.scope: Deactivated successfully.
Nov 29 00:37:57 np0005539482 podman[265279]: 2025-11-29 05:37:57.975699573 +0000 UTC m=+0.135235311 container died 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:37:57 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e43a2465f2d330d79a34442fb31d1b665f8f96b3ee3334fcd922340271fc0ded-merged.mount: Deactivated successfully.
Nov 29 00:37:58 np0005539482 podman[265279]: 2025-11-29 05:37:58.006464708 +0000 UTC m=+0.166000446 container remove 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:37:58 np0005539482 systemd[1]: libpod-conmon-7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58.scope: Deactivated successfully.
Nov 29 00:37:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 00:37:58 np0005539482 podman[265320]: 2025-11-29 05:37:58.180205329 +0000 UTC m=+0.037631981 container create 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 00:37:58 np0005539482 systemd[1]: Started libpod-conmon-95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c.scope.
Nov 29 00:37:58 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:37:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:58 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:37:58 np0005539482 podman[265320]: 2025-11-29 05:37:58.163246989 +0000 UTC m=+0.020673661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:37:58 np0005539482 podman[265320]: 2025-11-29 05:37:58.260763479 +0000 UTC m=+0.118190161 container init 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 00:37:58 np0005539482 podman[265320]: 2025-11-29 05:37:58.266046147 +0000 UTC m=+0.123472789 container start 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 00:37:58 np0005539482 podman[265320]: 2025-11-29 05:37:58.268781173 +0000 UTC m=+0.126207825 container attach 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:37:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:37:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 00:37:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Nov 29 00:37:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 00:37:58 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID eve49 with tenant e577c04bfe1b459f9aebd0f826827833
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 00:37:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:37:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:37:59 np0005539482 focused_feistel[265337]: {
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "osd_id": 0,
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "type": "bluestore"
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:    },
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "osd_id": 1,
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "type": "bluestore"
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:    },
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "osd_id": 2,
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:        "type": "bluestore"
Nov 29 00:37:59 np0005539482 focused_feistel[265337]:    }
Nov 29 00:37:59 np0005539482 focused_feistel[265337]: }
Nov 29 00:37:59 np0005539482 systemd[1]: libpod-95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c.scope: Deactivated successfully.
Nov 29 00:37:59 np0005539482 podman[265320]: 2025-11-29 05:37:59.258116203 +0000 UTC m=+1.115542855 container died 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 00:37:59 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5-merged.mount: Deactivated successfully.
Nov 29 00:37:59 np0005539482 podman[265320]: 2025-11-29 05:37:59.302521907 +0000 UTC m=+1.159948559 container remove 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:37:59 np0005539482 systemd[1]: libpod-conmon-95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c.scope: Deactivated successfully.
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:37:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:37:59 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a10508ad-79c0-41f2-9fdf-df00e4c59927 does not exist
Nov 29 00:37:59 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 8b4ea74e-bd67-4c06-afc9-dc77352aadeb does not exist
Nov 29 00:38:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 75 KiB/s wr, 9 op/s
Nov 29 00:38:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:38:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:38:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 64 KiB/s wr, 7 op/s
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID eve48 with tenant e577c04bfe1b459f9aebd0f826827833
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:02 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 00:38:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:38:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:38:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 7 op/s
Nov 29 00:38:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 11 op/s
Nov 29 00:38:06 np0005539482 podman[265433]: 2025-11-29 05:38:06.044699597 +0000 UTC m=+0.098037902 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) v1
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0
Nov 29 00:38:06 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0],prefix=session evict} (starting...)
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84/.meta.tmp'
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84/.meta.tmp' to config b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84/.meta'
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "format": "json"}]: dispatch
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 00:38:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Nov 29 00:38:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 29 00:38:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 8 op/s
Nov 29 00:38:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 00:38:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Nov 29 00:38:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 00:38:09 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID eve47 with tenant e577c04bfe1b459f9aebd0f826827833
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 104 KiB/s wr, 12 op/s
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:38:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:10 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753/.meta.tmp'
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753/.meta.tmp' to config b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753/.meta'
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "format": "json"}]: dispatch
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:38:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:38:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 8 op/s
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:38:13.753 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:38:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:38:13.753 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:38:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:38:13.753 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) v1
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0
Nov 29 00:38:13 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0],prefix=session evict} (starting...)
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 8 op/s
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1525944531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1525944531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Nov 29 00:38:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "format": "json"}]: dispatch
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:faeaf227-675c-42df-9bf7-248fca8b7753, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:faeaf227-675c-42df-9bf7-248fca8b7753, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:14 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:14.626+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'faeaf227-675c-42df-9bf7-248fca8b7753' of type subvolume
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'faeaf227-675c-42df-9bf7-248fca8b7753' of type subvolume
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753'' moved to trashcan
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 00:38:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 124 KiB/s wr, 15 op/s
Nov 29 00:38:16 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:38:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:38:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:38:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:17 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:38:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 96 KiB/s wr, 11 op/s
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:79586ddb-9940-4101-a183-8795d6ac1e84, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:79586ddb-9940-4101-a183-8795d6ac1e84, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:18.229+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79586ddb-9940-4101-a183-8795d6ac1e84' of type subvolume
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79586ddb-9940-4101-a183-8795d6ac1e84' of type subvolume
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84'' moved to trashcan
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Nov 29 00:38:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) v1
Nov 29 00:38:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0
Nov 29 00:38:18 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0],prefix=session evict} (starting...)
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0065e446-d05c-42f4-b14d-c32152b4c886, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0065e446-d05c-42f4-b14d-c32152b4c886, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:18.521+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0065e446-d05c-42f4-b14d-c32152b4c886' of type subvolume
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0065e446-d05c-42f4-b14d-c32152b4c886' of type subvolume
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886'' moved to trashcan
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 00:38:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 00:38:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Nov 29 00:38:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 29 00:38:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 138 KiB/s wr, 15 op/s
Nov 29 00:38:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:20 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:38:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:20 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 00:38:22 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a00c7ebd-01d8-4358-9f97-04e4aa820623", "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 00:38:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 00:38:23 np0005539482 podman[265461]: 2025-11-29 05:38:23.054317038 +0000 UTC m=+0.090822288 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:38:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:38:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:38:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:38:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 136 KiB/s wr, 16 op/s
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a00c7ebd-01d8-4358-9f97-04e4aa820623", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e90efdb1-518e-4a19-a290-0fbf105b6f6d", "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 00:38:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 podman[265482]: 2025-11-29 05:38:27.046922101 +0000 UTC m=+0.099468717 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e90efdb1-518e-4a19-a290-0fbf105b6f6d", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4/.meta.tmp'
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4/.meta.tmp' to config b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4/.meta'
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 73 KiB/s wr, 9 op/s
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb/.meta.tmp'
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb/.meta.tmp' to config b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb/.meta'
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "format": "json"}]: dispatch
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 00:38:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 00:38:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:28 np0005539482 nova_compute[254898]: 2025-11-29 05:38:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:28 np0005539482 nova_compute[254898]: 2025-11-29 05:38:28.969 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:28 np0005539482 nova_compute[254898]: 2025-11-29 05:38:28.969 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 00:38:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 00:38:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:30 np0005539482 nova_compute[254898]: 2025-11-29 05:38:30.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:30 np0005539482 nova_compute[254898]: 2025-11-29 05:38:30.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "format": "json"}]: dispatch
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1d9d275b-9d0b-4256-9071-300779a207f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1d9d275b-9d0b-4256-9071-300779a207f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:31.742+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9d275b-9d0b-4256-9071-300779a207f4' of type subvolume
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9d275b-9d0b-4256-9071-300779a207f4' of type subvolume
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4'' moved to trashcan
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:38:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:38:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:38:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:31 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589/.meta.tmp'
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589/.meta.tmp' to config b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589/.meta'
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 nova_compute[254898]: 2025-11-29 05:38:32.298 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/.meta.tmp'
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/.meta.tmp' to config b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/.meta'
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:32 np0005539482 nova_compute[254898]: 2025-11-29 05:38:32.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:32 np0005539482 nova_compute[254898]: 2025-11-29 05:38:32.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:33 np0005539482 nova_compute[254898]: 2025-11-29 05:38:33.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:33 np0005539482 nova_compute[254898]: 2025-11-29 05:38:33.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:38:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Nov 29 00:38:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:35 np0005539482 nova_compute[254898]: 2025-11-29 05:38:35.254 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:38:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:35 np0005539482 nova_compute[254898]: 2025-11-29 05:38:35.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 100 KiB/s wr, 12 op/s
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.065 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.065 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.066 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.066 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.067 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6565f80a-02f8-4a73-b996-bca74f45d589, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6565f80a-02f8-4a73-b996-bca74f45d589, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:36.323+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6565f80a-02f8-4a73-b996-bca74f45d589' of type subvolume
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6565f80a-02f8-4a73-b996-bca74f45d589' of type subvolume
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589'' moved to trashcan
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159/.meta.tmp'
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159/.meta.tmp' to config b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159/.meta'
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928314478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.494 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.658 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.659 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.660 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.660 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "tenant_id": "ae2a6e9fbea0426ebacf2fe56abb903e", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume authorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, tenant_id:ae2a6e9fbea0426ebacf2fe56abb903e, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"} v 0) v1
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-2083182201 with tenant ae2a6e9fbea0426ebacf2fe56abb903e
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume authorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, tenant_id:ae2a6e9fbea0426ebacf2fe56abb903e, vol_name:cephfs) < ""
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.976 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:38:36 np0005539482 nova_compute[254898]: 2025-11-29 05:38:36.976 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:38:36 np0005539482 podman[265532]: 2025-11-29 05:38:36.985019475 +0000 UTC m=+0.042255353 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.145 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.214 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.214 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.231 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.249 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.262 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907328920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume deauthorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.675 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.681 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.695 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.697 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.697 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.698 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.698 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"} v 0) v1
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"} v 0) v1
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]: dispatch
Nov 29 00:38:37 np0005539482 nova_compute[254898]: 2025-11-29 05:38:37.710 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 00:38:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]': finished
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume deauthorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume evict, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2083182201, client_metadata.root=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3
Nov 29 00:38:37 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-2083182201,client_metadata.root=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3],prefix=session evict} (starting...)
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume evict, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5ccc350-84d0-463a-8142-2450838c9e41, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5ccc350-84d0-463a-8142-2450838c9e41, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:37.860+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5ccc350-84d0-463a-8142-2450838c9e41' of type subvolume
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5ccc350-84d0-463a-8142-2450838c9e41' of type subvolume
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41'' moved to trashcan
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 00:38:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 00:38:38 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 00:38:38 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]: dispatch
Nov 29 00:38:38 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]': finished
Nov 29 00:38:38 np0005539482 nova_compute[254898]: 2025-11-29 05:38:38.711 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:38 np0005539482 nova_compute[254898]: 2025-11-29 05:38:38.711 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:38 np0005539482 nova_compute[254898]: 2025-11-29 05:38:38.711 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:38:38 np0005539482 nova_compute[254898]: 2025-11-29 05:38:38.712 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:38:38 np0005539482 nova_compute[254898]: 2025-11-29 05:38:38.730 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:38:38 np0005539482 nova_compute[254898]: 2025-11-29 05:38:38.730 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:38:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:38:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:39 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:39.327+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c79e8b2-8385-4693-a845-3fe4aa3849bb' of type subvolume
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c79e8b2-8385-4693-a845-3fe4aa3849bb' of type subvolume
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb'' moved to trashcan
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0940c3a8-0a26-4b45-8cd5-2278d86a8159' of type subvolume
Nov 29 00:38:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:39.446+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0940c3a8-0a26-4b45-8cd5-2278d86a8159' of type subvolume
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159'' moved to trashcan
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 00:38:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 136 KiB/s wr, 14 op/s
Nov 29 00:38:40 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:40 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:38:40 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:38:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:38:41
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:38:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 104 KiB/s wr, 11 op/s
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1460327761
Nov 29 00:38:42 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:38:42.475 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:38:42 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:38:42.477 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:42 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0/.meta.tmp'
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0/.meta.tmp' to config b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0/.meta'
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "format": "json"}]: dispatch
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 00:38:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 103 KiB/s wr, 11 op/s
Nov 29 00:38:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 133 KiB/s wr, 15 op/s
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:38:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:38:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:38:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:46 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:46 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "format": "json"}]: dispatch
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:38:47 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:47.144+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2efbb8e6-d3d3-430b-8165-2af4490ffea0' of type subvolume
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2efbb8e6-d3d3-430b-8165-2af4490ffea0' of type subvolume
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:38:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:38:47 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0'' moved to trashcan
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:38:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 00:38:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 96 KiB/s wr, 11 op/s
Nov 29 00:38:48 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:38:48.480 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 116 KiB/s wr, 13 op/s
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp'
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp' to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta'
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "format": "json"}]: dispatch
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:50 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0001800252938399427 of space, bias 4.0, pg target 0.21603035260793124 quantized to 16 (current 16)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:38:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:38:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 50 KiB/s wr, 7 op/s
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2", "format": "json"}]: dispatch
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:38:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:38:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:38:53 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:38:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:38:54 np0005539482 podman[265581]: 2025-11-29 05:38:54.019915459 +0000 UTC m=+0.077428643 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 00:38:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 50 KiB/s wr, 6 op/s
Nov 29 00:38:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:38:54 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:38:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:38:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 53 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 89 KiB/s wr, 11 op/s
Nov 29 00:38:57 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:38:57 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:57 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:38:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:38:57 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:38:58 np0005539482 podman[265601]: 2025-11-29 05:38:58.04180267 +0000 UTC m=+0.093405720 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 53 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 6 op/s
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp'
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp' to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta'
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2", "force": true, "format": "json"}]: dispatch
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp'
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp' to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta'
Nov 29 00:38:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:39:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 84 KiB/s wr, 9 op/s
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:00 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ad665ca2-66c7-4b31-b888-ec01e05fe420 does not exist
Nov 29 00:39:00 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 5bd038e2-18e9-406f-acf3-8d32515c3b49 does not exist
Nov 29 00:39:00 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 27b6421f-db6b-4424-afb1-a3a7fd36c04b does not exist
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:39:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:01 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.636861978 +0000 UTC m=+0.033090782 container create b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:39:01 np0005539482 systemd[1]: Started libpod-conmon-b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99.scope.
Nov 29 00:39:01 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.622604753 +0000 UTC m=+0.018833587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.720908111 +0000 UTC m=+0.117136995 container init b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.726974667 +0000 UTC m=+0.123203491 container start b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.730432461 +0000 UTC m=+0.126661365 container attach b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:39:01 np0005539482 peaceful_driscoll[266037]: 167 167
Nov 29 00:39:01 np0005539482 systemd[1]: libpod-b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99.scope: Deactivated successfully.
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.735891354 +0000 UTC m=+0.132120188 container died b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:39:01 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d24688ffd8105db34b41dcc7b0c55af7b2833a7d46a74f835ba431b21bb55ecc-merged.mount: Deactivated successfully.
Nov 29 00:39:01 np0005539482 podman[266021]: 2025-11-29 05:39:01.780673447 +0000 UTC m=+0.176902261 container remove b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:39:01 np0005539482 systemd[1]: libpod-conmon-b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99.scope: Deactivated successfully.
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:01 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 00:39:01 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 00:39:01 np0005539482 podman[266061]: 2025-11-29 05:39:01.977523408 +0000 UTC m=+0.044328183 container create 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa/.meta.tmp'
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa/.meta.tmp' to config b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa/.meta'
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "format": "json"}]: dispatch
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 00:39:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:02 np0005539482 systemd[1]: Started libpod-conmon-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope.
Nov 29 00:39:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:39:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:02 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:02 np0005539482 podman[266061]: 2025-11-29 05:39:01.956601912 +0000 UTC m=+0.023406737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 7 op/s
Nov 29 00:39:02 np0005539482 podman[266061]: 2025-11-29 05:39:02.060621368 +0000 UTC m=+0.127426173 container init 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:39:02 np0005539482 podman[266061]: 2025-11-29 05:39:02.066135652 +0000 UTC m=+0.132940437 container start 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:39:02 np0005539482 podman[266061]: 2025-11-29 05:39:02.069232867 +0000 UTC m=+0.136037672 container attach 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "46ac263c-91aa-4770-862a-dd35f490382b", "format": "json"}]: dispatch
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:46ac263c-91aa-4770-862a-dd35f490382b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:46ac263c-91aa-4770-862a-dd35f490382b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:02 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:02.147+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46ac263c-91aa-4770-862a-dd35f490382b' of type subvolume
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46ac263c-91aa-4770-862a-dd35f490382b' of type subvolume
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b'' moved to trashcan
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
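The audit entries above show the OpenStack client driving the mgr `volumes` module through JSON command payloads (`fs subvolume create` / `getpath` / `clone status` / `rm`). A minimal sketch of how those payloads are shaped, using a hypothetical `fs_subvolume_cmd` helper (not part of any Ceph library) that mirrors the `cmd=[{...}]` dictionaries logged above:

```python
import json

def fs_subvolume_cmd(prefix, vol_name, sub_name, **kwargs):
    """Build one mgr command payload, mirroring the cmd=[{...}]
    entries in the audit log lines above."""
    cmd = {"prefix": prefix, "vol_name": vol_name,
           "sub_name": sub_name, "format": "json"}
    cmd.update(kwargs)
    return json.dumps(cmd)

# The sequence seen in the log: a sized, namespace-isolated create,
# then a forced rm of another subvolume.
create = fs_subvolume_cmd("fs subvolume create", "cephfs",
                          "651b4bb8-257f-4b27-8e91-4460977c10fa",
                          size=1073741824, namespace_isolated=True,
                          mode="0755")
rm = fs_subvolume_cmd("fs subvolume rm", "cephfs",
                      "46ac263c-91aa-4770-862a-dd35f490382b",
                      force=True)
```

The `rm` at 00:39:02 does not delete data synchronously: as the subsequent log lines show, the subvolume path is moved to a trashcan and an async purge job is queued for the volume.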
Nov 29 00:39:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 29 00:39:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 00:39:02 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 29 00:39:03 np0005539482 objective_jang[266077]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:39:03 np0005539482 objective_jang[266077]: --> relative data size: 1.0
Nov 29 00:39:03 np0005539482 objective_jang[266077]: --> All data devices are unavailable
Nov 29 00:39:03 np0005539482 systemd[1]: libpod-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope: Deactivated successfully.
Nov 29 00:39:03 np0005539482 podman[266061]: 2025-11-29 05:39:03.144195488 +0000 UTC m=+1.211000313 container died 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:39:03 np0005539482 systemd[1]: libpod-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope: Consumed 1.002s CPU time.
Nov 29 00:39:03 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f-merged.mount: Deactivated successfully.
Nov 29 00:39:03 np0005539482 podman[266061]: 2025-11-29 05:39:03.206840383 +0000 UTC m=+1.273645168 container remove 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:39:03 np0005539482 systemd[1]: libpod-conmon-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope: Deactivated successfully.
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.835739754 +0000 UTC m=+0.046103775 container create 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 00:39:03 np0005539482 systemd[1]: Started libpod-conmon-9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199.scope.
Nov 29 00:39:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.904234831 +0000 UTC m=+0.114598842 container init 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.810176746 +0000 UTC m=+0.020540817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.911927118 +0000 UTC m=+0.122291109 container start 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.914962561 +0000 UTC m=+0.125326572 container attach 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:39:03 np0005539482 nifty_mestorf[266276]: 167 167
Nov 29 00:39:03 np0005539482 systemd[1]: libpod-9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199.scope: Deactivated successfully.
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.916508828 +0000 UTC m=+0.126872819 container died 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:39:03 np0005539482 systemd[1]: var-lib-containers-storage-overlay-80d838b1d52f2bda0ef28a1e27263cd5d5d4e076ebe36bf3d998a982377eb150-merged.mount: Deactivated successfully.
Nov 29 00:39:03 np0005539482 podman[266260]: 2025-11-29 05:39:03.957565921 +0000 UTC m=+0.167929952 container remove 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 00:39:03 np0005539482 systemd[1]: libpod-conmon-9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199.scope: Deactivated successfully.
Nov 29 00:39:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 76 KiB/s wr, 9 op/s
Nov 29 00:39:04 np0005539482 podman[266300]: 2025-11-29 05:39:04.098777107 +0000 UTC m=+0.036999686 container create 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:39:04 np0005539482 systemd[1]: Started libpod-conmon-6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f.scope.
Nov 29 00:39:04 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:39:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:04 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:04 np0005539482 podman[266300]: 2025-11-29 05:39:04.082734509 +0000 UTC m=+0.020957108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:39:04 np0005539482 podman[266300]: 2025-11-29 05:39:04.184724296 +0000 UTC m=+0.122946935 container init 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:39:04 np0005539482 podman[266300]: 2025-11-29 05:39:04.195709822 +0000 UTC m=+0.133932401 container start 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:39:04 np0005539482 podman[266300]: 2025-11-29 05:39:04.19849433 +0000 UTC m=+0.136716979 container attach 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 00:39:04 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:04 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:04 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:04 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
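The `auth get-or-create` entries above show how `fs subvolume authorize` translates into a cephx caps list scoped to one subvolume: an mds path cap, an osd cap restricted to the `fsvolumens_<sub_name>` RADOS namespace, and read-only mon access. A sketch of that assembly, with a hypothetical `subvolume_caps` helper reproducing the exact cap strings dispatched in the log (rw access level assumed):

```python
def subvolume_caps(sub_path, data_pool, sub_name):
    """Assemble the flat [entity, capstring, ...] list as it appears
    in the 'auth get-or-create' audit entry above."""
    return [
        "mds", f"allow rw path={sub_path}",
        "osd", f"allow rw pool={data_pool} "
               f"namespace=fsvolumens_{sub_name}",
        "mon", "allow r",
    ]

caps = subvolume_caps(
    "/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/"
    "b9af2a18-fc04-4d96-ba1b-197da2f0632f",
    "cephfs.cephfs.data",
    "848ba3c8-c30f-497b-9372-9c6fce9360b1",
)
```

Note the namespace restriction only isolates the client on the data pool; the mds path cap is what confines it to the subvolume's directory tree.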
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]: {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:    "0": [
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:        {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "devices": [
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "/dev/loop3"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            ],
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_name": "ceph_lv0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_size": "21470642176",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "name": "ceph_lv0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "tags": {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cluster_name": "ceph",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.crush_device_class": "",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.encrypted": "0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osd_id": "0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.type": "block",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.vdo": "0"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            },
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "type": "block",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "vg_name": "ceph_vg0"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:        }
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:    ],
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:    "1": [
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:        {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "devices": [
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "/dev/loop4"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            ],
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_name": "ceph_lv1",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_size": "21470642176",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "name": "ceph_lv1",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "tags": {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cluster_name": "ceph",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.crush_device_class": "",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.encrypted": "0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osd_id": "1",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.type": "block",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.vdo": "0"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            },
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "type": "block",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "vg_name": "ceph_vg1"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:        }
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:    ],
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:    "2": [
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:        {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "devices": [
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "/dev/loop5"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            ],
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_name": "ceph_lv2",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_size": "21470642176",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "name": "ceph_lv2",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "tags": {
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.cluster_name": "ceph",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.crush_device_class": "",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.encrypted": "0",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osd_id": "2",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.type": "block",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:                "ceph.vdo": "0"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            },
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "type": "block",
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:            "vg_name": "ceph_vg2"
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:        }
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]:    ]
Nov 29 00:39:04 np0005539482 dreamy_shtern[266317]: }
Nov 29 00:39:04 np0005539482 systemd[1]: libpod-6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f.scope: Deactivated successfully.
Nov 29 00:39:04 np0005539482 podman[266300]: 2025-11-29 05:39:04.957133269 +0000 UTC m=+0.895355858 container died 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:04 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637-merged.mount: Deactivated successfully.
Nov 29 00:39:05 np0005539482 podman[266300]: 2025-11-29 05:39:05.001342669 +0000 UTC m=+0.939565248 container remove 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:39:05 np0005539482 systemd[1]: libpod-conmon-6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f.scope: Deactivated successfully.
Nov 29 00:39:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.539540956 +0000 UTC m=+0.041435302 container create 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:39:05 np0005539482 systemd[1]: Started libpod-conmon-751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3.scope.
Nov 29 00:39:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.589217878 +0000 UTC m=+0.091112224 container init 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.59509737 +0000 UTC m=+0.096991696 container start 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.598644647 +0000 UTC m=+0.100539003 container attach 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:39:05 np0005539482 funny_kilby[266498]: 167 167
Nov 29 00:39:05 np0005539482 systemd[1]: libpod-751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3.scope: Deactivated successfully.
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.599666371 +0000 UTC m=+0.101560707 container died 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.519280927 +0000 UTC m=+0.021175313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:39:05 np0005539482 systemd[1]: var-lib-containers-storage-overlay-fdd9c4fea18c4173057f9700c93e33590b2b59a8c8d9918b1dead178755b9fc9-merged.mount: Deactivated successfully.
Nov 29 00:39:05 np0005539482 podman[266482]: 2025-11-29 05:39:05.637185558 +0000 UTC m=+0.139079894 container remove 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:39:05 np0005539482 systemd[1]: libpod-conmon-751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3.scope: Deactivated successfully.
Nov 29 00:39:05 np0005539482 podman[266522]: 2025-11-29 05:39:05.81293116 +0000 UTC m=+0.055404702 container create a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 00:39:05 np0005539482 systemd[1]: Started libpod-conmon-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope.
Nov 29 00:39:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:39:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:05 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:39:05 np0005539482 podman[266522]: 2025-11-29 05:39:05.797311721 +0000 UTC m=+0.039785273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:39:05 np0005539482 podman[266522]: 2025-11-29 05:39:05.901885961 +0000 UTC m=+0.144359493 container init a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:39:05 np0005539482 podman[266522]: 2025-11-29 05:39:05.912866537 +0000 UTC m=+0.155340079 container start a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:39:05 np0005539482 podman[266522]: 2025-11-29 05:39:05.916152636 +0000 UTC m=+0.158626198 container attach a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:39:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]: {
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "osd_id": 0,
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "type": "bluestore"
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:    },
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "osd_id": 1,
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "type": "bluestore"
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:    },
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "osd_id": 2,
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:        "type": "bluestore"
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]:    }
Nov 29 00:39:06 np0005539482 gallant_dubinsky[266538]: }
Nov 29 00:39:06 np0005539482 systemd[1]: libpod-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope: Deactivated successfully.
Nov 29 00:39:06 np0005539482 conmon[266538]: conmon a8c95b8211b76115d6e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope/container/memory.events
Nov 29 00:39:06 np0005539482 podman[266522]: 2025-11-29 05:39:06.808979612 +0000 UTC m=+1.051453144 container died a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:39:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 00:39:09 np0005539482 systemd[1]: var-lib-containers-storage-overlay-e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40-merged.mount: Deactivated successfully.
Nov 29 00:39:09 np0005539482 podman[266522]: 2025-11-29 05:39:09.629431343 +0000 UTC m=+3.871904875 container remove a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:39:09 np0005539482 systemd[1]: libpod-conmon-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope: Deactivated successfully.
Nov 29 00:39:09 np0005539482 podman[266582]: 2025-11-29 05:39:09.664039161 +0000 UTC m=+1.705171606 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev c6022f75-d48d-43cc-9264-a6616bc9d011 does not exist
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 284a4067-78d0-496c-a010-29f6db773d94 does not exist
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:09 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:09 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp'
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp' to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta'
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "format": "json"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "format": "json"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:651b4bb8-257f-4b27-8e91-4460977c10fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:651b4bb8-257f-4b27-8e91-4460977c10fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:10.216+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '651b4bb8-257f-4b27-8e91-4460977c10fa' of type subvolume
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '651b4bb8-257f-4b27-8e91-4460977c10fa' of type subvolume
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa'' moved to trashcan
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:39:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:39:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:39:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:39:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:39:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:39:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 00:39:13 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:39:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:13 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2", "format": "json"}]: dispatch
Nov 29 00:39:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:39:13.754 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:39:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:39:13.754 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:39:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:39:13.755 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:13 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 461 B/s rd, 60 KiB/s wr, 6 op/s
Nov 29 00:39:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:39:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297539908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:39:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:39:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297539908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:39:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 29 00:39:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 00:39:15 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 29 00:39:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 72 KiB/s wr, 8 op/s
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:17 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp'
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp' to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta'
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp'
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp' to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta'
Nov 29 00:39:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 72 KiB/s wr, 8 op/s
Nov 29 00:39:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 00:39:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:20 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:20 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:39:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:20 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:20 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "format": "json"}]: dispatch
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:adc37617-82af-4ff2-b1ff-41acd332035e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:adc37617-82af-4ff2-b1ff-41acd332035e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:21 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:21.155+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adc37617-82af-4ff2-b1ff-41acd332035e' of type subvolume
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adc37617-82af-4ff2-b1ff-41acd332035e' of type subvolume
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e'' moved to trashcan
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/.meta.tmp'
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/.meta.tmp' to config b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/.meta'
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "format": "json"}]: dispatch
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:39:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:39:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 29 00:39:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 00:39:23 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 4 op/s
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03/.meta.tmp'
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03/.meta.tmp' to config b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03/.meta'
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "format": "json"}]: dispatch
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:25 np0005539482 podman[266656]: 2025-11-29 05:39:25.003092163 +0000 UTC m=+0.057360948 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 00:39:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:39:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:39:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 55 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 88 KiB/s wr, 9 op/s
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/.meta.tmp'
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/.meta.tmp' to config b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/.meta'
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "format": "json"}]: dispatch
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:39:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:27 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:27.987892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394767987956, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2381, "num_deletes": 254, "total_data_size": 2856436, "memory_usage": 2913288, "flush_reason": "Manual Compaction"}
Nov 29 00:39:27 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768007419, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2796236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21143, "largest_seqno": 23523, "table_properties": {"data_size": 2785829, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25686, "raw_average_key_size": 21, "raw_value_size": 2763316, "raw_average_value_size": 2302, "num_data_blocks": 280, "num_entries": 1200, "num_filter_entries": 1200, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394620, "oldest_key_time": 1764394620, "file_creation_time": 1764394767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 19561 microseconds, and 7347 cpu microseconds.
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.007458) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2796236 bytes OK
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.007481) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.008986) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009019) EVENT_LOG_v1 {"time_micros": 1764394768009014, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2845623, prev total WAL file size 2845623, number of live WAL files 2.
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009719) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2730KB)], [50(7561KB)]
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768009749, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10539189, "oldest_snapshot_seqno": -1}
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5157 keys, 8783966 bytes, temperature: kUnknown
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768064582, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8783966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8747553, "index_size": 22415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 127047, "raw_average_key_size": 24, "raw_value_size": 8652875, "raw_average_value_size": 1677, "num_data_blocks": 934, "num_entries": 5157, "num_filter_entries": 5157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.064818) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8783966 bytes
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.066510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.9 rd, 160.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 5683, records dropped: 526 output_compression: NoCompression
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.066528) EVENT_LOG_v1 {"time_micros": 1764394768066518, "job": 26, "event": "compaction_finished", "compaction_time_micros": 54910, "compaction_time_cpu_micros": 18463, "output_level": 6, "num_output_files": 1, "total_output_size": 8783966, "num_input_records": 5683, "num_output_records": 5157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 55 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 88 KiB/s wr, 9 op/s
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768067048, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768068443, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:28 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "format": "json"}]: dispatch
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5ebeea41-cd85-43e6-b90c-d40733412d03, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5ebeea41-cd85-43e6-b90c-d40733412d03, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5ebeea41-cd85-43e6-b90c-d40733412d03' of type subvolume
Nov 29 00:39:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:28.608+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5ebeea41-cd85-43e6-b90c-d40733412d03' of type subvolume
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03'' moved to trashcan
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 00:39:29 np0005539482 podman[266676]: 2025-11-29 05:39:29.027168779 +0000 UTC m=+0.074248068 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 00:39:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 11 op/s
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 29 00:39:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:30 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:39:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:31 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 120 KiB/s wr, 11 op/s
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd/.meta.tmp'
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd/.meta.tmp' to config b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd/.meta'
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:32 np0005539482 nova_compute[254898]: 2025-11-29 05:39:32.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:32 np0005539482 nova_compute[254898]: 2025-11-29 05:39:32.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:33 np0005539482 nova_compute[254898]: 2025-11-29 05:39:33.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 10 op/s
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3
Nov 29 00:39:34 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3],prefix=session evict} (starting...)
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:34.373+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7cad0a5-6ce4-4ca6-994f-ac3363a79f14' of type subvolume
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7cad0a5-6ce4-4ca6-994f-ac3363a79f14' of type subvolume
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14'' moved to trashcan
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:34 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:34 np0005539482 nova_compute[254898]: 2025-11-29 05:39:34.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:34 np0005539482 nova_compute[254898]: 2025-11-29 05:39:34.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:34 np0005539482 nova_compute[254898]: 2025-11-29 05:39:34.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "format": "json"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45/.meta.tmp'
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45/.meta.tmp' to config b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45/.meta'
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "format": "json"}]: dispatch
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 00:39:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 56 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 119 KiB/s wr, 11 op/s
Nov 29 00:39:36 np0005539482 nova_compute[254898]: 2025-11-29 05:39:36.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/.meta.tmp'
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/.meta.tmp' to config b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/.meta'
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "format": "json"}]: dispatch
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:39:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7231 writes, 27K keys, 7231 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7231 writes, 1573 syncs, 4.60 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1451 writes, 3407 keys, 1451 commit groups, 1.0 writes per commit group, ingest: 1.89 MB, 0.00 MB/s#012Interval WAL: 1451 writes, 597 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:39:37 np0005539482 nova_compute[254898]: 2025-11-29 05:39:37.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:37 np0005539482 nova_compute[254898]: 2025-11-29 05:39:37.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:39:37 np0005539482 nova_compute[254898]: 2025-11-29 05:39:37.998 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:37.999 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:37.999 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.000 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.000 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 56 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 119 KiB/s wr, 11 op/s
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:38 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:38 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517717071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.460 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.655 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.656 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5082MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.656 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.657 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.726 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.727 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:39:38 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:39:38 np0005539482 nova_compute[254898]: 2025-11-29 05:39:38.750 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "target_sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, target_sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] tracking-id e2d24acc-59c4-4926-91ba-61c4618234e2 for path b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, target_sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 70cb9e84-4e7b-4e83-b5ff-872d8a0e3944)
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.200+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.200+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.200+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.200+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.200+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 70cb9e84-4e7b-4e83-b5ff-872d8a0e3944) -- by 0 seconds
Nov 29 00:39:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:39:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650996313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 00:39:39 np0005539482 nova_compute[254898]: 2025-11-29 05:39:39.247 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 00:39:39 np0005539482 nova_compute[254898]: 2025-11-29 05:39:39.252 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 00:39:39 np0005539482 nova_compute[254898]: 2025-11-29 05:39:39.264 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 00:39:39 np0005539482 nova_compute[254898]: 2025-11-29 05:39:39.265 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 00:39:39 np0005539482 nova_compute[254898]: 2025-11-29 05:39:39.265 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 00:39:40 np0005539482 podman[266773]: 2025-11-29 05:39:40.007414343 +0000 UTC m=+0.060722135 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 00:39:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 141 KiB/s wr, 15 op/s
Nov 29 00:39:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:40 np0005539482 nova_compute[254898]: 2025-11-29 05:39:40.267 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:39:40 np0005539482 nova_compute[254898]: 2025-11-29 05:39:40.267 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 00:39:40 np0005539482 nova_compute[254898]: 2025-11-29 05:39:40.267 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 00:39:40 np0005539482 nova_compute[254898]: 2025-11-29 05:39:40.283 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 00:39:40 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.csskcz(active, since 30m)
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:39:41
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'vms']
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:39:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 520 B/s rd, 120 KiB/s wr, 13 op/s
Nov 29 00:39:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:39:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.2 total, 600.0 interval
Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 10K writes, 2999 syncs, 3.64 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3872 writes, 13K keys, 3872 commit groups, 1.0 writes per commit group, ingest: 20.11 MB, 0.03 MB/s
Interval WAL: 3872 writes, 1699 syncs, 2.28 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.snap/54db2b9e-cb54-440e-8afd-6c23560987db/54e69477-7697-43a2-9122-006fb641f43b' to b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/75c755e3-5ba3-4412-8578-c62be99c7fab'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6/.meta.tmp'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6/.meta.tmp' to config b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6/.meta'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "format": "json"}]: dispatch
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] untracking e2d24acc-59c4-4926-91ba-61c4618234e2
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 70cb9e84-4e7b-4e83-b5ff-872d8a0e3944)
Nov 29 00:39:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 00:39:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 12 op/s
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7
Nov 29 00:39:44 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7],prefix=session evict} (starting...)
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "format": "json"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:44.638+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06420fd0-e9c0-463d-9475-8429a0c8fd0d' of type subvolume
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06420fd0-e9c0-463d-9475-8429a0c8fd0d' of type subvolume
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d'' moved to trashcan
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:39:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:39:45 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:45 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 00:39:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 00:39:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 57 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 184 KiB/s wr, 19 op/s
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a/.meta.tmp'
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a/.meta.tmp' to config b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a/.meta'
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "format": "json"}]: dispatch
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:39:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7984 writes, 30K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 7984 writes, 1865 syncs, 4.28 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2353 writes, 6787 keys, 2353 commit groups, 1.0 writes per commit group, ingest: 7.64 MB, 0.01 MB/s#012Interval WAL: 2353 writes, 1005 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/.meta.tmp'
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/.meta.tmp' to config b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/.meta'
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "format": "json"}]: dispatch
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:39:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:39:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 57 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 128 KiB/s wr, 13 op/s
Nov 29 00:39:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:48 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:50 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 00:39:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 178 KiB/s wr, 19 op/s
Nov 29 00:39:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00027146873168587614 of space, bias 4.0, pg target 0.32576247802305136 quantized to 16 (current 16)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 8.266792016669923e-07 of space, bias 1.0, pg target 0.0002480037605000977 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:51.587+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adec6cb7-3928-4a56-9d48-76b4d10cc25a' of type subvolume
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adec6cb7-3928-4a56-9d48-76b4d10cc25a' of type subvolume
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a'' moved to trashcan
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:51 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:51 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 115 KiB/s wr, 13 op/s
Nov 29 00:39:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:52 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 115 KiB/s wr, 12 op/s
Nov 29 00:39:54 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:54 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d
Nov 29 00:39:55 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d],prefix=session evict} (starting...)
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:39:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:39:55 np0005539482 podman[266798]: 2025-11-29 05:39:55.534053212 +0000 UTC m=+0.064929405 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:55.640+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b16d258-3e4e-4612-860f-4a4dc4e6aef6' of type subvolume
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b16d258-3e4e-4612-860f-4a4dc4e6aef6' of type subvolume
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6'' moved to trashcan
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:55.736+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dddb87ae-5fcb-4c01-90f6-c57d130f8474' of type subvolume
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dddb87ae-5fcb-4c01-90f6-c57d130f8474' of type subvolume
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474'' moved to trashcan
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 00:39:56 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:39:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:56 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 58 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 150 KiB/s wr, 16 op/s
Nov 29 00:39:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:56 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 58 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 9 op/s
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:39:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:58 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:39:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:39:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "format": "json"}]: dispatch
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4fc216c7-7565-440e-ba91-0a6f65473f45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4fc216c7-7565-440e-ba91-0a6f65473f45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:39:58 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:58.660+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fc216c7-7565-440e-ba91-0a6f65473f45' of type subvolume
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fc216c7-7565-440e-ba91-0a6f65473f45' of type subvolume
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "force": true, "format": "json"}]: dispatch
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45'' moved to trashcan
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:39:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:39:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:39:59 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:39:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 159 KiB/s wr, 17 op/s
Nov 29 00:40:00 np0005539482 podman[266822]: 2025-11-29 05:40:00.087734773 +0000 UTC m=+0.132602166 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:40:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 00:40:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 00:40:00 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 00:40:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 11 op/s
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:02.335+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee8187c1-56b3-4603-8456-6c0a4e9f03fd' of type subvolume
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee8187c1-56b3-4603-8456-6c0a4e9f03fd' of type subvolume
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd'' moved to trashcan
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 00:40:02 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:02 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:02 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 11 op/s
Nov 29 00:40:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 151 KiB/s wr, 15 op/s
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:40:06 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:40:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:40:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 117 KiB/s wr, 11 op/s
Nov 29 00:40:09 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:40:09.050 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:40:09 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:40:09.051 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 149 KiB/s wr, 16 op/s
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 00:40:10 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:10 np0005539482 podman[266966]: 2025-11-29 05:40:10.268450757 +0000 UTC m=+0.052925787 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:40:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:40:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:40:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:40:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:40:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.649281687 +0000 UTC m=+0.037485165 container create c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:40:11 np0005539482 systemd[1]: Started libpod-conmon-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope.
Nov 29 00:40:11 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.631485758 +0000 UTC m=+0.019689276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.732583724 +0000 UTC m=+0.120787222 container init c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.739110042 +0000 UTC m=+0.127313520 container start c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.742284798 +0000 UTC m=+0.130488296 container attach c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:40:11 np0005539482 naughty_kare[267278]: 167 167
Nov 29 00:40:11 np0005539482 systemd[1]: libpod-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope: Deactivated successfully.
Nov 29 00:40:11 np0005539482 conmon[267278]: conmon c6e8ed5b8f8432d662e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope/container/memory.events
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.745715461 +0000 UTC m=+0.133918939 container died c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:40:11 np0005539482 systemd[1]: var-lib-containers-storage-overlay-935431c4a73898ae19faf92d8545da1655cd7b130c475a366114d8976bcc7546-merged.mount: Deactivated successfully.
Nov 29 00:40:11 np0005539482 podman[267261]: 2025-11-29 05:40:11.778508161 +0000 UTC m=+0.166711639 container remove c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:40:11 np0005539482 systemd[1]: libpod-conmon-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope: Deactivated successfully.
Nov 29 00:40:11 np0005539482 podman[267302]: 2025-11-29 05:40:11.963334266 +0000 UTC m=+0.043641513 container create 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:40:12 np0005539482 systemd[1]: Started libpod-conmon-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope.
Nov 29 00:40:12 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:12 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:12 np0005539482 podman[267302]: 2025-11-29 05:40:12.031502099 +0000 UTC m=+0.111809356 container init 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:40:12 np0005539482 podman[267302]: 2025-11-29 05:40:11.941832118 +0000 UTC m=+0.022139395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:12 np0005539482 podman[267302]: 2025-11-29 05:40:12.03734708 +0000 UTC m=+0.117654317 container start 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:40:12 np0005539482 podman[267302]: 2025-11-29 05:40:12.040558937 +0000 UTC m=+0.120866154 container attach 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 00:40:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]: [
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:    {
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "available": false,
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "ceph_device": false,
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "lsm_data": {},
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "lvs": [],
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "path": "/dev/sr0",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "rejected_reasons": [
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "Insufficient space (<5GB)",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "Has a FileSystem"
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        ],
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        "sys_api": {
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "actuators": null,
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "device_nodes": "sr0",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "devname": "sr0",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "human_readable_size": "482.00 KB",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "id_bus": "ata",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "model": "QEMU DVD-ROM",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "nr_requests": "2",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "parent": "/dev/sr0",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "partitions": {},
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "path": "/dev/sr0",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "removable": "1",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "rev": "2.5+",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "ro": "0",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "rotational": "1",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "sas_address": "",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "sas_device_handle": "",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "scheduler_mode": "mq-deadline",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "sectors": 0,
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "sectorsize": "2048",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "size": 493568.0,
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "support_discard": "2048",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "type": "disk",
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:            "vendor": "QEMU"
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:        }
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]:    }
Nov 29 00:40:13 np0005539482 dreamy_torvalds[267319]: ]
Nov 29 00:40:13 np0005539482 systemd[1]: libpod-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope: Deactivated successfully.
Nov 29 00:40:13 np0005539482 systemd[1]: libpod-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope: Consumed 1.401s CPU time.
Nov 29 00:40:13 np0005539482 conmon[267319]: conmon 3ae85c4f40ff48139a75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope/container/memory.events
Nov 29 00:40:13 np0005539482 podman[267302]: 2025-11-29 05:40:13.383369031 +0000 UTC m=+1.463676248 container died 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:40:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a-merged.mount: Deactivated successfully.
Nov 29 00:40:13 np0005539482 podman[267302]: 2025-11-29 05:40:13.433153751 +0000 UTC m=+1.513460978 container remove 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:40:13 np0005539482 systemd[1]: libpod-conmon-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope: Deactivated successfully.
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 3d348d28-eb9e-427c-a8ec-6083bfe53d55 does not exist
Nov 29 00:40:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d5bc9f56-9610-499b-8eec-d6f6d8ec10e8 does not exist
Nov 29 00:40:13 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 60d4e170-cf75-4535-8d35-cdd582eefec5 does not exist
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:40:13.755 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:40:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:40:13.756 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:40:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:40:13.756 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:40:13 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:40:14 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.06670888 +0000 UTC m=+0.042601307 container create eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:40:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Nov 29 00:40:14 np0005539482 systemd[1]: Started libpod-conmon-eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236.scope.
Nov 29 00:40:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.142517698 +0000 UTC m=+0.118410145 container init eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.048772429 +0000 UTC m=+0.024664866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.149861825 +0000 UTC m=+0.125754252 container start eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.153343498 +0000 UTC m=+0.129235925 container attach eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:40:14 np0005539482 hardcore_carver[269656]: 167 167
Nov 29 00:40:14 np0005539482 systemd[1]: libpod-eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236.scope: Deactivated successfully.
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.154760662 +0000 UTC m=+0.130653089 container died eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:40:14 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b768e45ff28ae80d1be567b649db7070b848ef4de6d78152c73c9ac3c6a5004d-merged.mount: Deactivated successfully.
Nov 29 00:40:14 np0005539482 podman[269639]: 2025-11-29 05:40:14.196046918 +0000 UTC m=+0.171939345 container remove eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:40:14 np0005539482 systemd[1]: libpod-conmon-eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236.scope: Deactivated successfully.
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262537410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262537410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:40:14 np0005539482 podman[269678]: 2025-11-29 05:40:14.363596716 +0000 UTC m=+0.059145077 container create 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 00:40:14 np0005539482 systemd[1]: Started libpod-conmon-2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7.scope.
Nov 29 00:40:14 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:14 np0005539482 podman[269678]: 2025-11-29 05:40:14.335969061 +0000 UTC m=+0.031517502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:14 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:14 np0005539482 podman[269678]: 2025-11-29 05:40:14.458683918 +0000 UTC m=+0.154232289 container init 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:40:14 np0005539482 podman[269678]: 2025-11-29 05:40:14.473974836 +0000 UTC m=+0.169523187 container start 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 00:40:14 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 00:40:14 np0005539482 podman[269678]: 2025-11-29 05:40:14.477230025 +0000 UTC m=+0.172778376 container attach 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:40:15 np0005539482 objective_germain[269694]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:40:15 np0005539482 objective_germain[269694]: --> relative data size: 1.0
Nov 29 00:40:15 np0005539482 objective_germain[269694]: --> All data devices are unavailable
Nov 29 00:40:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:15 np0005539482 systemd[1]: libpod-2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7.scope: Deactivated successfully.
Nov 29 00:40:15 np0005539482 podman[269678]: 2025-11-29 05:40:15.527733184 +0000 UTC m=+1.223281535 container died 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:40:15 np0005539482 systemd[1]: var-lib-containers-storage-overlay-445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc-merged.mount: Deactivated successfully.
Nov 29 00:40:15 np0005539482 podman[269678]: 2025-11-29 05:40:15.579567523 +0000 UTC m=+1.275115874 container remove 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:40:15 np0005539482 systemd[1]: libpod-conmon-2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7.scope: Deactivated successfully.
Nov 29 00:40:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 131 KiB/s wr, 15 op/s
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.2125919 +0000 UTC m=+0.034347739 container create a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:40:16 np0005539482 systemd[1]: Started libpod-conmon-a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66.scope.
Nov 29 00:40:16 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.29183168 +0000 UTC m=+0.113587549 container init a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.198256024 +0000 UTC m=+0.020011873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.302418595 +0000 UTC m=+0.124174434 container start a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:40:16 np0005539482 gracious_vaughan[269895]: 167 167
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.306134735 +0000 UTC m=+0.127890574 container attach a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:40:16 np0005539482 systemd[1]: libpod-a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66.scope: Deactivated successfully.
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.30759706 +0000 UTC m=+0.129352899 container died a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 00:40:16 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6c0499c7bf4d6e392645418e2a70e92b06a3f9337d5a9533950f01744cd9591c-merged.mount: Deactivated successfully.
Nov 29 00:40:16 np0005539482 podman[269879]: 2025-11-29 05:40:16.346893927 +0000 UTC m=+0.168649776 container remove a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:40:16 np0005539482 systemd[1]: libpod-conmon-a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66.scope: Deactivated successfully.
Nov 29 00:40:16 np0005539482 podman[269918]: 2025-11-29 05:40:16.508982624 +0000 UTC m=+0.049026703 container create 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:40:16 np0005539482 systemd[1]: Started libpod-conmon-92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1.scope.
Nov 29 00:40:16 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:16 np0005539482 podman[269918]: 2025-11-29 05:40:16.482472935 +0000 UTC m=+0.022517004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:16 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:16 np0005539482 podman[269918]: 2025-11-29 05:40:16.594134906 +0000 UTC m=+0.134178995 container init 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:40:16 np0005539482 podman[269918]: 2025-11-29 05:40:16.604051765 +0000 UTC m=+0.144095854 container start 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:40:16 np0005539482 podman[269918]: 2025-11-29 05:40:16.607510808 +0000 UTC m=+0.147554897 container attach 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]: {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:    "0": [
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:        {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "devices": [
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "/dev/loop3"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            ],
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_name": "ceph_lv0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_size": "21470642176",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "name": "ceph_lv0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "tags": {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cluster_name": "ceph",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.crush_device_class": "",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.encrypted": "0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osd_id": "0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.type": "block",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.vdo": "0"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            },
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "type": "block",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "vg_name": "ceph_vg0"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:        }
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:    ],
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:    "1": [
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:        {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "devices": [
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "/dev/loop4"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            ],
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_name": "ceph_lv1",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_size": "21470642176",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "name": "ceph_lv1",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "tags": {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cluster_name": "ceph",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.crush_device_class": "",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.encrypted": "0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osd_id": "1",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.type": "block",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.vdo": "0"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            },
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "type": "block",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "vg_name": "ceph_vg1"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:        }
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:    ],
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:    "2": [
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:        {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "devices": [
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "/dev/loop5"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            ],
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_name": "ceph_lv2",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_size": "21470642176",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "name": "ceph_lv2",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "tags": {
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.cluster_name": "ceph",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.crush_device_class": "",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.encrypted": "0",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osd_id": "2",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.type": "block",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:                "ceph.vdo": "0"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            },
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "type": "block",
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:            "vg_name": "ceph_vg2"
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:        }
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]:    ]
Nov 29 00:40:17 np0005539482 hardcore_perlman[269934]: }
Nov 29 00:40:17 np0005539482 systemd[1]: libpod-92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1.scope: Deactivated successfully.
Nov 29 00:40:17 np0005539482 podman[269918]: 2025-11-29 05:40:17.415898161 +0000 UTC m=+0.955942250 container died 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:40:17 np0005539482 systemd[1]: var-lib-containers-storage-overlay-83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392-merged.mount: Deactivated successfully.
Nov 29 00:40:17 np0005539482 podman[269918]: 2025-11-29 05:40:17.4805579 +0000 UTC m=+1.020601939 container remove 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:40:17 np0005539482 systemd[1]: libpod-conmon-92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1.scope: Deactivated successfully.
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 00:40:17 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 89 KiB/s wr, 10 op/s
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.014183611 +0000 UTC m=+0.033009697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.169964166 +0000 UTC m=+0.188790232 container create fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:40:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:18 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:18 np0005539482 systemd[1]: Started libpod-conmon-fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8.scope.
Nov 29 00:40:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.263581862 +0000 UTC m=+0.282408008 container init fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.2751007 +0000 UTC m=+0.293926766 container start fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:40:18 np0005539482 xenodochial_zhukovsky[270115]: 167 167
Nov 29 00:40:18 np0005539482 systemd[1]: libpod-fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8.scope: Deactivated successfully.
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.278488211 +0000 UTC m=+0.297314297 container attach fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.279776922 +0000 UTC m=+0.298602988 container died fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:40:18 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3226dc7d43f2a3e8009327f3f142e76e28090a4e53c74da84ac7970dfe634f66-merged.mount: Deactivated successfully.
Nov 29 00:40:18 np0005539482 podman[270098]: 2025-11-29 05:40:18.313613178 +0000 UTC m=+0.332439234 container remove fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:40:18 np0005539482 systemd[1]: libpod-conmon-fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8.scope: Deactivated successfully.
Nov 29 00:40:18 np0005539482 podman[270139]: 2025-11-29 05:40:18.453167561 +0000 UTC m=+0.035260291 container create 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:40:18 np0005539482 systemd[1]: Started libpod-conmon-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope.
Nov 29 00:40:18 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:40:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:18 np0005539482 podman[270139]: 2025-11-29 05:40:18.437997595 +0000 UTC m=+0.020090345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:40:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:18 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:40:18 np0005539482 podman[270139]: 2025-11-29 05:40:18.551330217 +0000 UTC m=+0.133422977 container init 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:40:18 np0005539482 podman[270139]: 2025-11-29 05:40:18.556220565 +0000 UTC m=+0.138313305 container start 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:40:18 np0005539482 podman[270139]: 2025-11-29 05:40:18.559031513 +0000 UTC m=+0.141124283 container attach 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:40:19 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 00:40:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:19 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:40:19.053 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]: {
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "osd_id": 0,
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "type": "bluestore"
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:    },
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "osd_id": 1,
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "type": "bluestore"
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:    },
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "osd_id": 2,
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:        "type": "bluestore"
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]:    }
Nov 29 00:40:19 np0005539482 trusting_hopper[270156]: }
Nov 29 00:40:19 np0005539482 systemd[1]: libpod-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope: Deactivated successfully.
Nov 29 00:40:19 np0005539482 conmon[270156]: conmon 5fb0c829e88695c8deb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope/container/memory.events
Nov 29 00:40:19 np0005539482 podman[270139]: 2025-11-29 05:40:19.530682362 +0000 UTC m=+1.112775102 container died 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:40:19 np0005539482 systemd[1]: var-lib-containers-storage-overlay-fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f-merged.mount: Deactivated successfully.
Nov 29 00:40:19 np0005539482 podman[270139]: 2025-11-29 05:40:19.592244115 +0000 UTC m=+1.174336855 container remove 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:40:19 np0005539482 systemd[1]: libpod-conmon-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope: Deactivated successfully.
Nov 29 00:40:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:40:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:40:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:19 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 2b960a0e-17cf-4744-b61d-22f67967eca6 does not exist
Nov 29 00:40:19 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e47d686d-b5a6-4b50-993c-8415dbeb91db does not exist
Nov 29 00:40:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 120 KiB/s wr, 14 op/s
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.521937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820521982, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1205, "num_deletes": 257, "total_data_size": 1289445, "memory_usage": 1320472, "flush_reason": "Manual Compaction"}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820534015, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1252031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23524, "largest_seqno": 24728, "table_properties": {"data_size": 1246316, "index_size": 2855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14382, "raw_average_key_size": 20, "raw_value_size": 1233799, "raw_average_value_size": 1745, "num_data_blocks": 127, "num_entries": 707, "num_filter_entries": 707, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394768, "oldest_key_time": 1764394768, "file_creation_time": 1764394820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12124 microseconds, and 4325 cpu microseconds.
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.534062) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1252031 bytes OK
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.534080) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536010) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536027) EVENT_LOG_v1 {"time_micros": 1764394820536021, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1283333, prev total WAL file size 1283333, number of live WAL files 2.
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.537081) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1222KB)], [53(8578KB)]
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820537154, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10035997, "oldest_snapshot_seqno": -1}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5327 keys, 9940487 bytes, temperature: kUnknown
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820627997, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9940487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9901125, "index_size": 24916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 132423, "raw_average_key_size": 24, "raw_value_size": 9801742, "raw_average_value_size": 1840, "num_data_blocks": 1040, "num_entries": 5327, "num_filter_entries": 5327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.628207) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9940487 bytes
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.695079) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.4 rd, 109.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.4 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(16.0) write-amplify(7.9) OK, records in: 5864, records dropped: 537 output_compression: NoCompression
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.695117) EVENT_LOG_v1 {"time_micros": 1764394820695099, "job": 28, "event": "compaction_finished", "compaction_time_micros": 90901, "compaction_time_cpu_micros": 41767, "output_level": 6, "num_output_files": 1, "total_output_size": 9940487, "num_input_records": 5864, "num_output_records": 5327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820695767, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820698776, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:40:20 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:40:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 10 op/s
Nov 29 00:40:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:22 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 00:40:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/.meta.tmp'
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/.meta.tmp' to config b'/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/.meta'
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944'' moved to trashcan
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:40:23 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 10 op/s
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 00:40:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:24 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 00:40:24 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 00:40:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]} v 0) v1
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]: dispatch
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]': finished
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]: dispatch
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]': finished
Nov 29 00:40:25 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:26 np0005539482 podman[270257]: 2025-11-29 05:40:26.06785121 +0000 UTC m=+0.104783417 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:40:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 61 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 122 KiB/s wr, 13 op/s
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 00:40:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 61 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 65 KiB/s wr, 6 op/s
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp'
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp' to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta'
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "format": "json"}]: dispatch
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]} v 0) v1
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]': finished
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49
Nov 29 00:40:29 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49],prefix=session evict} (starting...)
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]': finished
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a98b9fa5-d939-4fac-9215-346a94abca4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a98b9fa5-d939-4fac-9215-346a94abca4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:29.244+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a98b9fa5-d939-4fac-9215-346a94abca4f' of type subvolume
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a98b9fa5-d939-4fac-9215-346a94abca4f' of type subvolume
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f'' moved to trashcan
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:40:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 129 KiB/s wr, 14 op/s
Nov 29 00:40:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "format": "json"}]: dispatch
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:30 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:30.764+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '779d5f7d-4b59-47d7-ae31-6662b5ea257d' of type subvolume
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '779d5f7d-4b59-47d7-ae31-6662b5ea257d' of type subvolume
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d'' moved to trashcan
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:40:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 00:40:30 np0005539482 nova_compute[254898]: 2025-11-29 05:40:30.966 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:40:31 np0005539482 podman[270279]: 2025-11-29 05:40:31.093040317 +0000 UTC m=+0.140435876 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 00:40:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8", "format": "json"}]: dispatch
Nov 29 00:40:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 97 KiB/s wr, 10 op/s
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 00:40:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) v1
Nov 29 00:40:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 29 00:40:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 00:40:32 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 00:40:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 29 00:40:33 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 29 00:40:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 29 00:40:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 00:40:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 29 00:40:33 np0005539482 nova_compute[254898]: 2025-11-29 05:40:33.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:40:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 117 KiB/s wr, 12 op/s
Nov 29 00:40:34 np0005539482 nova_compute[254898]: 2025-11-29 05:40:34.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:40:34 np0005539482 nova_compute[254898]: 2025-11-29 05:40:34.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:40:34 np0005539482 nova_compute[254898]: 2025-11-29 05:40:34.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:40:34 np0005539482 nova_compute[254898]: 2025-11-29 05:40:34.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 00:40:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:35 np0005539482 nova_compute[254898]: 2025-11-29 05:40:35.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 124 KiB/s wr, 14 op/s
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "format": "json"}]: dispatch
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:36 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:36.335+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '848ba3c8-c30f-497b-9372-9c6fce9360b1' of type subvolume
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '848ba3c8-c30f-497b-9372-9c6fce9360b1' of type subvolume
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1'' moved to trashcan
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:40:36 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 00:40:36 np0005539482 nova_compute[254898]: 2025-11-29 05:40:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp'
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp' to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta'
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp'
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp' to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta'
Nov 29 00:40:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 124 KiB/s wr, 14 op/s
Nov 29 00:40:38 np0005539482 nova_compute[254898]: 2025-11-29 05:40:38.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:40:38 np0005539482 nova_compute[254898]: 2025-11-29 05:40:38.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:40:38 np0005539482 nova_compute[254898]: 2025-11-29 05:40:38.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:40:38 np0005539482 nova_compute[254898]: 2025-11-29 05:40:38.973 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:40:38 np0005539482 nova_compute[254898]: 2025-11-29 05:40:38.973 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.005 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.005 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.005 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.006 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:40:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:40:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558097515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.404 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.583 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.584 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5104MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.584 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.584 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.665 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.665 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:40:39 np0005539482 nova_compute[254898]: 2025-11-29 05:40:39.688 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:40:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:40:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391827870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 70 KiB/s wr, 8 op/s
Nov 29 00:40:40 np0005539482 nova_compute[254898]: 2025-11-29 05:40:40.104 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:40:40 np0005539482 nova_compute[254898]: 2025-11-29 05:40:40.109 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:40:40 np0005539482 nova_compute[254898]: 2025-11-29 05:40:40.129 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:40:40 np0005539482 nova_compute[254898]: 2025-11-29 05:40:40.130 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:40:40 np0005539482 nova_compute[254898]: 2025-11-29 05:40:40.130 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:40:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 29 00:40:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 00:40:40 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "format": "json"}]: dispatch
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:40:40 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:40.940+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '163fafb9-e2a0-4bac-af62-6ce4faca289f' of type subvolume
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '163fafb9-e2a0-4bac-af62-6ce4faca289f' of type subvolume
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f'' moved to trashcan
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:40:40 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 00:40:41 np0005539482 podman[270350]: 2025-11-29 05:40:41.006321323 +0000 UTC m=+0.050011136 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:40:41 np0005539482 nova_compute[254898]: 2025-11-29 05:40:41.127 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/.meta.tmp'
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/.meta.tmp' to config b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/.meta'
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "format": "json"}]: dispatch
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:40:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:40:41
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data']
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f98b8ee0>)]
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 00:40:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 29 00:40:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 00:40:41 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:40:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:40:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 88 KiB/s wr, 10 op/s
Nov 29 00:40:42 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.csskcz(active, since 32m)
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "format": "json"}]: dispatch
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 27 KiB/s wr, 3 op/s
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/.meta.tmp'
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/.meta.tmp' to config b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/.meta'
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "format": "json"}]: dispatch
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:40:44 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:40:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 00:40:45 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID Joe with tenant 4e135fffa1e64bf8b2e43bd33b51cf15
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 84 KiB/s wr, 97 op/s
Nov 29 00:40:47 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4", "format": "json"}]: dispatch
Nov 29 00:40:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:47 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 56 KiB/s wr, 93 op/s
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/.meta.tmp'
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/.meta.tmp' to config b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/.meta'
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "format": "json"}]: dispatch
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:49 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 87 KiB/s wr, 81 op/s
Nov 29 00:40:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 29 00:40:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 00:40:50 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 29 00:40:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9", "format": "json"}]: dispatch
Nov 29 00:40:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00036214908103796316 of space, bias 4.0, pg target 0.43457889724555576 quantized to 16 (current 16)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:40:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 83 KiB/s wr, 77 op/s
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp'
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp' to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta'
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "format": "json"}]: dispatch
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:40:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 00:40:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 29 00:40:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 00:40:52 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:52.792+0000 7fa4c75e5640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 29 00:40:52 np0005539482 ceph-mgr[75473]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 29 00:40:53 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 00:40:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 83 KiB/s wr, 77 op/s
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c", "format": "json"}]: dispatch
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s wr, 4 op/s
Nov 29 00:40:56 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:40:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 00:40:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"} v 0) v1
Nov 29 00:40:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 00:40:56 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-2011883581 with tenant e97b8963e55a4094b1cb702d19d887ba
Nov 29 00:40:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:40:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 00:40:56 np0005539482 podman[270369]: 2025-11-29 05:40:56.993237515 +0000 UTC m=+0.047399124 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 00:40:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 00:40:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:40:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:40:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s wr, 4 op/s
Nov 29 00:40:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 29 00:40:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 00:40:58 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 29 00:40:58 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec", "format": "json"}]: dispatch
Nov 29 00:40:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:58 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp'
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp' to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta'
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c", "force": true, "format": "json"}]: dispatch
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp'
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp' to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta'
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'fb7c7b44-2af1-44fc-8694-006120ff8320'
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84
Nov 29 00:40:59 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84],prefix=session evict} (starting...)
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:40:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 216 B/s rd, 73 KiB/s wr, 5 op/s
Nov 29 00:41:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 29 00:41:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 00:41:00 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 29 00:41:02 np0005539482 podman[270390]: 2025-11-29 05:41:02.073788724 +0000 UTC m=+0.128810425 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:41:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 6 op/s
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:03.322+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ff274bb-e3ac-4d57-8489-1cecf428692d' of type subvolume
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ff274bb-e3ac-4d57-8489-1cecf428692d' of type subvolume
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d'' moved to trashcan
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"} v 0) v1
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"} v 0) v1
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]': finished
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2011883581, client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84
Nov 29 00:41:03 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-2011883581,client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84],prefix=session evict} (starting...)
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]': finished
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 4 op/s
Nov 29 00:41:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 142 KiB/s wr, 11 op/s
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp'
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp' to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta'
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "format": "json"}]: dispatch
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:41:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) v1
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5
Nov 29 00:41:07 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5],prefix=session evict} (starting...)
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276", "format": "json"}]: dispatch
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 412 B/s rd, 115 KiB/s wr, 9 op/s
Nov 29 00:41:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 29 00:41:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 00:41:08 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 757 B/s rd, 132 KiB/s wr, 10 op/s
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828", "format": "json"}]: dispatch
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "admin", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 00:41:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) v1
Nov 29 00:41:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 00:41:10 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:10.578+0000 7fa4c75e5640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 29 00:41:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 29 00:41:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 00:41:10 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:11 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f976c040>)]
Nov 29 00:41:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 00:41:12 np0005539482 podman[270419]: 2025-11-29 05:41:12.02342074 +0000 UTC m=+0.071406142 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 00:41:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 155 KiB/s wr, 12 op/s
Nov 29 00:41:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 29 00:41:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 00:41:13 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 29 00:41:13 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.csskcz(active, since 32m)
Nov 29 00:41:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:41:13.756 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:41:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:41:13.757 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:41:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:41:13.757 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 77 KiB/s wr, 7 op/s
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950147048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950147048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID david with tenant 4e135fffa1e64bf8b2e43bd33b51cf15
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2", "format": "json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp'
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp' to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta'
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp'
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp' to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta'
Nov 29 00:41:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 00:41:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 00:41:15 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]': finished
Nov 29 00:41:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:41:15.773 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:41:15 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:41:15.774 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 903 B/s rd, 142 KiB/s wr, 11 op/s
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:16 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:16 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:41:16.776 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/.meta.tmp'
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/.meta.tmp' to config b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/.meta'
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "format": "json"}]: dispatch
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:41:17 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 5 op/s
Nov 29 00:41:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 29 00:41:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 00:41:18 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "format": "json"}]: dispatch
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:18 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:18.299+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9378b5f8-f3c7-4db4-98d1-4cf3955df852' of type subvolume
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9378b5f8-f3c7-4db4-98d1-4cf3955df852' of type subvolume
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852'' moved to trashcan
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:18 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 00:41:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 29 00:41:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 00:41:19 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 29 00:41:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 170 KiB/s wr, 13 op/s
Nov 29 00:41:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 29 00:41:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 00:41:20 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 29 00:41:20 np0005539482 podman[270613]: 2025-11-29 05:41:20.666521985 +0000 UTC m=+0.079331434 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:41:20 np0005539482 podman[270613]: 2025-11-29 05:41:20.796695212 +0000 UTC m=+0.209504611 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 29 00:41:21 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:21.360+0000 7fa4c75e5640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f", "format": "json"}]: dispatch
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "format": "json"}]: dispatch
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:41:21 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:41:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 7 op/s
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:22 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0cae118e-67b1-4a60-b785-6679b650e472 does not exist
Nov 29 00:41:22 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 9eb7f337-a29f-439e-8c01-ef31460ed309 does not exist
Nov 29 00:41:22 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 26718b66-413d-45b8-852b-558290df7b71 does not exist
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:41:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.383760085 +0000 UTC m=+0.060878409 container create 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:41:23 np0005539482 systemd[1]: Started libpod-conmon-69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1.scope.
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.360025883 +0000 UTC m=+0.037144287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:41:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.480490546 +0000 UTC m=+0.157608890 container init 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.490308783 +0000 UTC m=+0.167427107 container start 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.493629492 +0000 UTC m=+0.170747936 container attach 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 00:41:23 np0005539482 goofy_lederberg[271061]: 167 167
Nov 29 00:41:23 np0005539482 systemd[1]: libpod-69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1.scope: Deactivated successfully.
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.498385727 +0000 UTC m=+0.175504061 container died 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:41:23 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4c71b7b8d8ea0fb9986090fb1259b219330c8053d144f160706f3d9395b79815-merged.mount: Deactivated successfully.
Nov 29 00:41:23 np0005539482 podman[271045]: 2025-11-29 05:41:23.535977673 +0000 UTC m=+0.213095997 container remove 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:41:23 np0005539482 systemd[1]: libpod-conmon-69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1.scope: Deactivated successfully.
Nov 29 00:41:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:41:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:41:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:23 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:41:23 np0005539482 podman[271085]: 2025-11-29 05:41:23.713330738 +0000 UTC m=+0.046786779 container create 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:41:23 np0005539482 systemd[1]: Started libpod-conmon-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope.
Nov 29 00:41:23 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:41:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:23 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:23 np0005539482 podman[271085]: 2025-11-29 05:41:23.692213729 +0000 UTC m=+0.025669850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:41:23 np0005539482 podman[271085]: 2025-11-29 05:41:23.794718889 +0000 UTC m=+0.128175040 container init 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:41:23 np0005539482 podman[271085]: 2025-11-29 05:41:23.80551413 +0000 UTC m=+0.138970211 container start 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:41:23 np0005539482 podman[271085]: 2025-11-29 05:41:23.809244489 +0000 UTC m=+0.142700550 container attach 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:41:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 7 op/s
Nov 29 00:41:24 np0005539482 quirky_brattain[271102]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:41:24 np0005539482 quirky_brattain[271102]: --> relative data size: 1.0
Nov 29 00:41:24 np0005539482 quirky_brattain[271102]: --> All data devices are unavailable
Nov 29 00:41:24 np0005539482 systemd[1]: libpod-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope: Deactivated successfully.
Nov 29 00:41:24 np0005539482 systemd[1]: libpod-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope: Consumed 1.096s CPU time.
Nov 29 00:41:24 np0005539482 podman[271085]: 2025-11-29 05:41:24.961935171 +0000 UTC m=+1.295391222 container died 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 00:41:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b-merged.mount: Deactivated successfully.
Nov 29 00:41:25 np0005539482 podman[271085]: 2025-11-29 05:41:25.00501881 +0000 UTC m=+1.338474841 container remove 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:41:25 np0005539482 systemd[1]: libpod-conmon-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope: Deactivated successfully.
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '28265ef5-ca45-4354-be2b-4e281fa424cd'
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/b0384108-1904-48a8-a8b3-3bb88d8155ec
Nov 29 00:41:25 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/b0384108-1904-48a8-a8b3-3bb88d8155ec],prefix=session evict} (starting...)
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "format": "json"}]: dispatch
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:41:25 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.600791968 +0000 UTC m=+0.034564664 container create 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:41:25 np0005539482 systemd[1]: Started libpod-conmon-38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b.scope.
Nov 29 00:41:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.670290964 +0000 UTC m=+0.104063680 container init 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.67680762 +0000 UTC m=+0.110580316 container start 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.680187662 +0000 UTC m=+0.113960378 container attach 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:41:25 np0005539482 ecstatic_brown[271304]: 167 167
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.585436308 +0000 UTC m=+0.019209024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:41:25 np0005539482 systemd[1]: libpod-38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b.scope: Deactivated successfully.
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.681705019 +0000 UTC m=+0.115477725 container died 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:41:25 np0005539482 systemd[1]: var-lib-containers-storage-overlay-03dea9f972729653c9ee72230a6bbd0297db961f3982f034360a12ef0623d0d1-merged.mount: Deactivated successfully.
Nov 29 00:41:25 np0005539482 podman[271287]: 2025-11-29 05:41:25.720281088 +0000 UTC m=+0.154053784 container remove 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:41:25 np0005539482 systemd[1]: libpod-conmon-38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b.scope: Deactivated successfully.
Nov 29 00:41:25 np0005539482 podman[271327]: 2025-11-29 05:41:25.92946349 +0000 UTC m=+0.049426273 container create b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:41:25 np0005539482 systemd[1]: Started libpod-conmon-b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282.scope.
Nov 29 00:41:25 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:41:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:25 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:26 np0005539482 podman[271327]: 2025-11-29 05:41:26.00620965 +0000 UTC m=+0.126172473 container init b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:41:26 np0005539482 podman[271327]: 2025-11-29 05:41:25.91497695 +0000 UTC m=+0.034939753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:41:26 np0005539482 podman[271327]: 2025-11-29 05:41:26.013754572 +0000 UTC m=+0.133717365 container start b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:41:26 np0005539482 podman[271327]: 2025-11-29 05:41:26.017316458 +0000 UTC m=+0.137279261 container attach b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:41:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 115 KiB/s wr, 9 op/s
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]: {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:    "0": [
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:        {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "devices": [
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "/dev/loop3"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            ],
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_name": "ceph_lv0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_size": "21470642176",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "name": "ceph_lv0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "tags": {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cluster_name": "ceph",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.crush_device_class": "",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.encrypted": "0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osd_id": "0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.type": "block",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.vdo": "0"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            },
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "type": "block",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "vg_name": "ceph_vg0"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:        }
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:    ],
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:    "1": [
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:        {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "devices": [
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "/dev/loop4"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            ],
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_name": "ceph_lv1",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_size": "21470642176",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "name": "ceph_lv1",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "tags": {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cluster_name": "ceph",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.crush_device_class": "",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.encrypted": "0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osd_id": "1",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.type": "block",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.vdo": "0"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            },
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "type": "block",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "vg_name": "ceph_vg1"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:        }
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:    ],
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:    "2": [
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:        {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "devices": [
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "/dev/loop5"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            ],
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_name": "ceph_lv2",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_size": "21470642176",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "name": "ceph_lv2",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "tags": {
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.cluster_name": "ceph",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.crush_device_class": "",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.encrypted": "0",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osd_id": "2",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.type": "block",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:                "ceph.vdo": "0"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            },
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "type": "block",
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:            "vg_name": "ceph_vg2"
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:        }
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]:    ]
Nov 29 00:41:26 np0005539482 romantic_varahamihira[271343]: }
Nov 29 00:41:26 np0005539482 systemd[1]: libpod-b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282.scope: Deactivated successfully.
Nov 29 00:41:26 np0005539482 podman[271327]: 2025-11-29 05:41:26.818057656 +0000 UTC m=+0.938020499 container died b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:41:26 np0005539482 systemd[1]: var-lib-containers-storage-overlay-1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505-merged.mount: Deactivated successfully.
Nov 29 00:41:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:27 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:27 np0005539482 podman[271327]: 2025-11-29 05:41:27.423026177 +0000 UTC m=+1.542988960 container remove b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 00:41:27 np0005539482 systemd[1]: libpod-conmon-b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282.scope: Deactivated successfully.
Nov 29 00:41:27 np0005539482 podman[271382]: 2025-11-29 05:41:27.630049336 +0000 UTC m=+0.085078741 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 695 B/s rd, 102 KiB/s wr, 8 op/s
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.13929326 +0000 UTC m=+0.047328692 container create 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 systemd[1]: Started libpod-conmon-93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b.scope.
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.113906788 +0000 UTC m=+0.021942240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:41:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.254102947 +0000 UTC m=+0.162138409 container init 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.260544383 +0000 UTC m=+0.168579815 container start 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:41:28 np0005539482 loving_goodall[271541]: 167 167
Nov 29 00:41:28 np0005539482 systemd[1]: libpod-93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b.scope: Deactivated successfully.
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.291168371 +0000 UTC m=+0.199203803 container attach 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.291626252 +0000 UTC m=+0.199661674 container died 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:41:28 np0005539482 systemd[1]: var-lib-containers-storage-overlay-44255bd98b4c9d1565568c96d9ff4e8f7d2612a70bbbbc1dfbafbc203a31f816-merged.mount: Deactivated successfully.
Nov 29 00:41:28 np0005539482 podman[271524]: 2025-11-29 05:41:28.344979768 +0000 UTC m=+0.253015200 container remove 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:41:28 np0005539482 systemd[1]: libpod-conmon-93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b.scope: Deactivated successfully.
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 29 00:41:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 00:41:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) v1
Nov 29 00:41:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 29 00:41:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25
Nov 29 00:41:28 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25],prefix=session evict} (starting...)
Nov 29 00:41:28 np0005539482 podman[271566]: 2025-11-29 05:41:28.493765663 +0000 UTC m=+0.039744718 container create 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 systemd[1]: Started libpod-conmon-4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc.scope.
Nov 29 00:41:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:41:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:41:28 np0005539482 podman[271566]: 2025-11-29 05:41:28.569384346 +0000 UTC m=+0.115363421 container init 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:41:28 np0005539482 podman[271566]: 2025-11-29 05:41:28.477113412 +0000 UTC m=+0.023092487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:41:28 np0005539482 podman[271566]: 2025-11-29 05:41:28.579655014 +0000 UTC m=+0.125634059 container start 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:41:28 np0005539482 podman[271566]: 2025-11-29 05:41:28.582859601 +0000 UTC m=+0.128838666 container attach 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "target_sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, target_sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] tracking-id ec1326b5-a5e4-4d5f-8f2c-27b9bccec565 for path b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, target_sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab
Nov 29 00:41:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a4fbeb19-4b4a-408e-8a0f-278794e0aaab)
Nov 29 00:41:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a4fbeb19-4b4a-408e-8a0f-278794e0aaab) -- by 0 seconds
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 00:41:29 np0005539482 happy_noether[271584]: {
Nov 29 00:41:29 np0005539482 happy_noether[271584]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "osd_id": 0,
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "type": "bluestore"
Nov 29 00:41:29 np0005539482 happy_noether[271584]:    },
Nov 29 00:41:29 np0005539482 happy_noether[271584]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "osd_id": 1,
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "type": "bluestore"
Nov 29 00:41:29 np0005539482 happy_noether[271584]:    },
Nov 29 00:41:29 np0005539482 happy_noether[271584]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "osd_id": 2,
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:41:29 np0005539482 happy_noether[271584]:        "type": "bluestore"
Nov 29 00:41:29 np0005539482 happy_noether[271584]:    }
Nov 29 00:41:29 np0005539482 happy_noether[271584]: }
Nov 29 00:41:29 np0005539482 systemd[1]: libpod-4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc.scope: Deactivated successfully.
Nov 29 00:41:29 np0005539482 podman[271566]: 2025-11-29 05:41:29.555450092 +0000 UTC m=+1.101429187 container died 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:41:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4-merged.mount: Deactivated successfully.
Nov 29 00:41:29 np0005539482 podman[271566]: 2025-11-29 05:41:29.603247854 +0000 UTC m=+1.149226909 container remove 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:41:29 np0005539482 systemd[1]: libpod-conmon-4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc.scope: Deactivated successfully.
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:41:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f8658792-6b02-4ab8-95a2-65990a058386 does not exist
Nov 29 00:41:29 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 8703521b-8fdd-4efe-9e31-506a2e07e73b does not exist
Nov 29 00:41:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:30 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:41:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 111 KiB/s wr, 7 op/s
Nov 29 00:41:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:31 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.csskcz(active, since 32m)
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 96 KiB/s wr, 6 op/s
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.snap/f919bca8-f41c-47b0-8fca-f8f7988969c2/0b15d7c5-c29f-491e-8e79-ff980dbb8d2d' to b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/a159fbfa-75c1-4d65-9295-73f51ae6b10d'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] untracking ec1326b5-a5e4-4d5f-8f2c-27b9bccec565
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 00:41:32 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a4fbeb19-4b4a-408e-8a0f-278794e0aaab)
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "format": "json"}]: dispatch
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:28265ef5-ca45-4354-be2b-4e281fa424cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:28265ef5-ca45-4354-be2b-4e281fa424cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '28265ef5-ca45-4354-be2b-4e281fa424cd' of type subvolume
Nov 29 00:41:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:33.033+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '28265ef5-ca45-4354-be2b-4e281fa424cd' of type subvolume
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 29 00:41:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd'' moved to trashcan
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 00:41:33 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 29 00:41:33 np0005539482 podman[271705]: 2025-11-29 05:41:33.117065673 +0000 UTC m=+0.160934100 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 00:41:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 29 00:41:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 00:41:34 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 29 00:41:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 6 op/s
Nov 29 00:41:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "format": "json"}]: dispatch
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:07a65cd4-2777-43ad-b684-b3508a87dd10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:07a65cd4-2777-43ad-b684-b3508a87dd10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07a65cd4-2777-43ad-b684-b3508a87dd10' of type subvolume
Nov 29 00:41:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:35.737+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07a65cd4-2777-43ad-b684-b3508a87dd10' of type subvolume
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10'' moved to trashcan
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "format": "json"}]: dispatch
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb7c7b44-2af1-44fc-8694-006120ff8320, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb7c7b44-2af1-44fc-8694-006120ff8320, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb7c7b44-2af1-44fc-8694-006120ff8320' of type subvolume
Nov 29 00:41:35 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:35.898+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb7c7b44-2af1-44fc-8694-006120ff8320' of type subvolume
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320'' moved to trashcan
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:35 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 00:41:35 np0005539482 nova_compute[254898]: 2025-11-29 05:41:35.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 157 KiB/s wr, 12 op/s
Nov 29 00:41:36 np0005539482 nova_compute[254898]: 2025-11-29 05:41:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:36 np0005539482 nova_compute[254898]: 2025-11-29 05:41:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:36 np0005539482 nova_compute[254898]: 2025-11-29 05:41:36.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:36 np0005539482 nova_compute[254898]: 2025-11-29 05:41:36.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:36 np0005539482 nova_compute[254898]: 2025-11-29 05:41:36.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:41:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 69 KiB/s wr, 5 op/s
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.996 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:41:38 np0005539482 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "format": "json"}]: dispatch
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a0e01f60-977a-4212-be2c-851b3318eb22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a0e01f60-977a-4212-be2c-851b3318eb22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0e01f60-977a-4212-be2c-851b3318eb22' of type subvolume
Nov 29 00:41:39 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:39.339+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0e01f60-977a-4212-be2c-851b3318eb22' of type subvolume
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22'' moved to trashcan
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:39 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 00:41:39 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:41:39 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839332280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.473 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.648 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.649 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5033MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.649 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.649 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.721 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.721 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:41:39 np0005539482 nova_compute[254898]: 2025-11-29 05:41:39.738 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:41:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 129 KiB/s wr, 10 op/s
Nov 29 00:41:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:41:40 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182515748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:41:40 np0005539482 nova_compute[254898]: 2025-11-29 05:41:40.209 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:41:40 np0005539482 nova_compute[254898]: 2025-11-29 05:41:40.215 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:41:40 np0005539482 nova_compute[254898]: 2025-11-29 05:41:40.234 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:41:40 np0005539482 nova_compute[254898]: 2025-11-29 05:41:40.235 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:41:40 np0005539482 nova_compute[254898]: 2025-11-29 05:41:40.236 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:41:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 29 00:41:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 00:41:40 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:41:41
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:41:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:41:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 128 KiB/s wr, 10 op/s
Nov 29 00:41:42 np0005539482 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:42 np0005539482 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:41:42 np0005539482 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:41:42 np0005539482 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:41:42 np0005539482 nova_compute[254898]: 2025-11-29 05:41:42.244 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:41:43 np0005539482 podman[271776]: 2025-11-29 05:41:43.045238475 +0000 UTC m=+0.089517939 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "admin", "format": "json"}]: dispatch
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 29 00:41:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:43.115+0000 7fa4c75e5640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "format": "json"}]: dispatch
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:873c8599-1b6c-425f-8c5c-0a211fc50713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:873c8599-1b6c-425f-8c5c-0a211fc50713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:41:43 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:43.223+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '873c8599-1b6c-425f-8c5c-0a211fc50713' of type subvolume
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '873c8599-1b6c-425f-8c5c-0a211fc50713' of type subvolume
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "force": true, "format": "json"}]: dispatch
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713'' moved to trashcan
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:41:43 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 00:41:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 103 KiB/s wr, 8 op/s
Nov 29 00:41:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 78 KiB/s wr, 5 op/s
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.237882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906237959, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1517, "num_deletes": 257, "total_data_size": 2136129, "memory_usage": 2171560, "flush_reason": "Manual Compaction"}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906256469, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2102066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24729, "largest_seqno": 26245, "table_properties": {"data_size": 2094699, "index_size": 4184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17657, "raw_average_key_size": 21, "raw_value_size": 2079122, "raw_average_value_size": 2514, "num_data_blocks": 186, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394820, "oldest_key_time": 1764394820, "file_creation_time": 1764394906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18635 microseconds, and 10384 cpu microseconds.
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.256529) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2102066 bytes OK
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.256553) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.258429) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.258451) EVENT_LOG_v1 {"time_micros": 1764394906258444, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.258475) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2128889, prev total WAL file size 2128889, number of live WAL files 2.
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.259709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2052KB)], [56(9707KB)]
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906259763, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12042553, "oldest_snapshot_seqno": -1}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5622 keys, 10213474 bytes, temperature: kUnknown
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906347476, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10213474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10172030, "index_size": 26294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 140044, "raw_average_key_size": 24, "raw_value_size": 10067339, "raw_average_value_size": 1790, "num_data_blocks": 1092, "num_entries": 5622, "num_filter_entries": 5622, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.347805) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10213474 bytes
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.349495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.1 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 6154, records dropped: 532 output_compression: NoCompression
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.349518) EVENT_LOG_v1 {"time_micros": 1764394906349506, "job": 30, "event": "compaction_finished", "compaction_time_micros": 87842, "compaction_time_cpu_micros": 45060, "output_level": 6, "num_output_files": 1, "total_output_size": 10213474, "num_input_records": 6154, "num_output_records": 5622, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906350130, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906352558, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.259583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:41:46 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:41:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 78 KiB/s wr, 5 op/s
Nov 29 00:41:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 3 op/s
Nov 29 00:41:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000447614992348766 of space, bias 4.0, pg target 0.5371379908185192 quantized to 16 (current 16)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:41:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:41:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 32 KiB/s wr, 2 op/s
Nov 29 00:41:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 2 op/s
Nov 29 00:41:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:41:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Nov 29 00:41:58 np0005539482 podman[271800]: 2025-11-29 05:41:58.012538221 +0000 UTC m=+0.061020522 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 00:41:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 0 op/s
Nov 29 00:42:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 0 op/s
Nov 29 00:42:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 00:42:04 np0005539482 podman[271821]: 2025-11-29 05:42:04.068187103 +0000 UTC m=+0.106237681 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 00:42:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 00:42:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 00:42:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:42:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:42:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:42:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:42:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:42:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:42:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:42:13.758 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:42:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:42:13.758 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:42:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:42:13.759 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:42:14 np0005539482 podman[271847]: 2025-11-29 05:42:14.002402666 +0000 UTC m=+0.052500676 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 00:42:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:42:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120545269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:42:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:42:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120545269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:42:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:19 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 00:42:19 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:22 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 00:42:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 00:42:22 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 00:42:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:42:22 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:42:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:42:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116/.meta.tmp'
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116/.meta.tmp' to config b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116/.meta'
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "format": "json"}]: dispatch
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:26 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:42:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:42:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Nov 29 00:42:29 np0005539482 podman[271868]: 2025-11-29 05:42:29.042728741 +0000 UTC m=+0.084920797 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 98b4dba3-f3de-4000-807e-1d794b2848c4 does not exist
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 57f9961f-c742-4a5d-9361-45f1f9e0fabc does not exist
Nov 29 00:42:30 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 19791311-c2a5-4edd-8093-d62722ec746e does not exist
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:42:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:42:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:42:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:42:31 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.345078072 +0000 UTC m=+0.048487610 container create 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:42:31 np0005539482 systemd[1]: Started libpod-conmon-02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77.scope.
Nov 29 00:42:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.321686378 +0000 UTC m=+0.025095926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.422617291 +0000 UTC m=+0.126026809 container init 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.431374141 +0000 UTC m=+0.134783629 container start 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.434530617 +0000 UTC m=+0.137940135 container attach 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:42:31 np0005539482 lucid_fermi[272177]: 167 167
Nov 29 00:42:31 np0005539482 systemd[1]: libpod-02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77.scope: Deactivated successfully.
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.441233569 +0000 UTC m=+0.144643107 container died 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 00:42:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c84975bf8393862dae94e172a3535358cf593a08818aeeb866fb749f168563c8-merged.mount: Deactivated successfully.
Nov 29 00:42:31 np0005539482 podman[272161]: 2025-11-29 05:42:31.491452889 +0000 UTC m=+0.194862387 container remove 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:42:31 np0005539482 systemd[1]: libpod-conmon-02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77.scope: Deactivated successfully.
Nov 29 00:42:31 np0005539482 podman[272200]: 2025-11-29 05:42:31.664377808 +0000 UTC m=+0.042053415 container create c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:42:31 np0005539482 systemd[1]: Started libpod-conmon-c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0.scope.
Nov 29 00:42:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:42:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:31 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:31 np0005539482 podman[272200]: 2025-11-29 05:42:31.646588519 +0000 UTC m=+0.024264146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:42:31 np0005539482 podman[272200]: 2025-11-29 05:42:31.748976866 +0000 UTC m=+0.126652493 container init c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:42:31 np0005539482 podman[272200]: 2025-11-29 05:42:31.755553365 +0000 UTC m=+0.133228972 container start c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:42:31 np0005539482 podman[272200]: 2025-11-29 05:42:31.758759262 +0000 UTC m=+0.136434869 container attach c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:42:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 00:42:32 np0005539482 beautiful_booth[272216]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:42:32 np0005539482 beautiful_booth[272216]: --> relative data size: 1.0
Nov 29 00:42:32 np0005539482 beautiful_booth[272216]: --> All data devices are unavailable
Nov 29 00:42:32 np0005539482 systemd[1]: libpod-c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0.scope: Deactivated successfully.
Nov 29 00:42:32 np0005539482 podman[272200]: 2025-11-29 05:42:32.756032148 +0000 UTC m=+1.133707755 container died c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:42:32 np0005539482 systemd[1]: var-lib-containers-storage-overlay-082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246-merged.mount: Deactivated successfully.
Nov 29 00:42:32 np0005539482 podman[272200]: 2025-11-29 05:42:32.808457692 +0000 UTC m=+1.186133299 container remove c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:42:32 np0005539482 systemd[1]: libpod-conmon-c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0.scope: Deactivated successfully.
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.338954279 +0000 UTC m=+0.037788872 container create 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:42:33 np0005539482 systemd[1]: Started libpod-conmon-307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb.scope.
Nov 29 00:42:33 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.414644123 +0000 UTC m=+0.113478816 container init 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.321641921 +0000 UTC m=+0.020476524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.420924064 +0000 UTC m=+0.119758667 container start 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.427592825 +0000 UTC m=+0.126427438 container attach 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:42:33 np0005539482 vigilant_lederberg[272418]: 167 167
Nov 29 00:42:33 np0005539482 systemd[1]: libpod-307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb.scope: Deactivated successfully.
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.430050174 +0000 UTC m=+0.128884777 container died 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:42:33 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b7e4c9e0de5ab84ad49616b98bfbf65a565e3b51793149d219e188e0cdda53bb-merged.mount: Deactivated successfully.
Nov 29 00:42:33 np0005539482 podman[272402]: 2025-11-29 05:42:33.462578438 +0000 UTC m=+0.161413031 container remove 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:42:33 np0005539482 systemd[1]: libpod-conmon-307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb.scope: Deactivated successfully.
Nov 29 00:42:33 np0005539482 podman[272442]: 2025-11-29 05:42:33.649307568 +0000 UTC m=+0.034364809 container create a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "format": "json"}]: dispatch
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4d6476ad-1951-44f5-839b-0b3b554d9116, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:33 np0005539482 systemd[1]: Started libpod-conmon-a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821.scope.
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4d6476ad-1951-44f5-839b-0b3b554d9116, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d6476ad-1951-44f5-839b-0b3b554d9116' of type subvolume
Nov 29 00:42:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:33.686+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d6476ad-1951-44f5-839b-0b3b554d9116' of type subvolume
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "force": true, "format": "json"}]: dispatch
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116'' moved to trashcan
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:42:33 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 00:42:33 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:42:33 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:33 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:33 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:33 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:33 np0005539482 podman[272442]: 2025-11-29 05:42:33.728014085 +0000 UTC m=+0.113071326 container init a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:42:33 np0005539482 podman[272442]: 2025-11-29 05:42:33.634313637 +0000 UTC m=+0.019370898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:42:33 np0005539482 podman[272442]: 2025-11-29 05:42:33.7344286 +0000 UTC m=+0.119485841 container start a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:42:33 np0005539482 podman[272442]: 2025-11-29 05:42:33.737288279 +0000 UTC m=+0.122345520 container attach a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:42:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]: {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:    "0": [
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:        {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "devices": [
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "/dev/loop3"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            ],
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_name": "ceph_lv0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_size": "21470642176",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "name": "ceph_lv0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "tags": {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cluster_name": "ceph",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.crush_device_class": "",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.encrypted": "0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osd_id": "0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.type": "block",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.vdo": "0"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            },
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "type": "block",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "vg_name": "ceph_vg0"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:        }
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:    ],
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:    "1": [
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:        {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "devices": [
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "/dev/loop4"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            ],
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_name": "ceph_lv1",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_size": "21470642176",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "name": "ceph_lv1",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "tags": {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cluster_name": "ceph",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.crush_device_class": "",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.encrypted": "0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osd_id": "1",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.type": "block",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.vdo": "0"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            },
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "type": "block",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "vg_name": "ceph_vg1"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:        }
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:    ],
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:    "2": [
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:        {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "devices": [
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "/dev/loop5"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            ],
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_name": "ceph_lv2",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_size": "21470642176",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "name": "ceph_lv2",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "tags": {
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.cluster_name": "ceph",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.crush_device_class": "",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.encrypted": "0",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osd_id": "2",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.type": "block",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:                "ceph.vdo": "0"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            },
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "type": "block",
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:            "vg_name": "ceph_vg2"
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:        }
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]:    ]
Nov 29 00:42:34 np0005539482 epic_rhodes[272458]: }
Nov 29 00:42:34 np0005539482 systemd[1]: libpod-a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821.scope: Deactivated successfully.
Nov 29 00:42:34 np0005539482 podman[272442]: 2025-11-29 05:42:34.448740446 +0000 UTC m=+0.833797697 container died a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:42:34 np0005539482 systemd[1]: var-lib-containers-storage-overlay-982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51-merged.mount: Deactivated successfully.
Nov 29 00:42:34 np0005539482 podman[272442]: 2025-11-29 05:42:34.502440971 +0000 UTC m=+0.887498212 container remove a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:42:34 np0005539482 systemd[1]: libpod-conmon-a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821.scope: Deactivated successfully.
Nov 29 00:42:34 np0005539482 podman[272468]: 2025-11-29 05:42:34.616857387 +0000 UTC m=+0.137978985 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.07583057 +0000 UTC m=+0.040459467 container create 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:42:35 np0005539482 systemd[1]: Started libpod-conmon-88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763.scope.
Nov 29 00:42:35 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.148119622 +0000 UTC m=+0.112748549 container init 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.056105894 +0000 UTC m=+0.020734841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.154543057 +0000 UTC m=+0.119171994 container start 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.158163724 +0000 UTC m=+0.122792641 container attach 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:42:35 np0005539482 reverent_driscoll[272663]: 167 167
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.159923506 +0000 UTC m=+0.124552403 container died 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:42:35 np0005539482 systemd[1]: libpod-88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763.scope: Deactivated successfully.
Nov 29 00:42:35 np0005539482 systemd[1]: var-lib-containers-storage-overlay-355964eda43185c0df3277e66a3c4e5ae455f47ecddfe8ef9269a13a08ad6541-merged.mount: Deactivated successfully.
Nov 29 00:42:35 np0005539482 podman[272647]: 2025-11-29 05:42:35.189697534 +0000 UTC m=+0.154326431 container remove 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:42:35 np0005539482 systemd[1]: libpod-conmon-88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763.scope: Deactivated successfully.
Nov 29 00:42:35 np0005539482 podman[272685]: 2025-11-29 05:42:35.362912529 +0000 UTC m=+0.036653464 container create 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:42:35 np0005539482 systemd[1]: Started libpod-conmon-900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9.scope.
Nov 29 00:42:35 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:42:35 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:35 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:35 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:35 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:42:35 np0005539482 podman[272685]: 2025-11-29 05:42:35.429158535 +0000 UTC m=+0.102899500 container init 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:42:35 np0005539482 podman[272685]: 2025-11-29 05:42:35.438100361 +0000 UTC m=+0.111841306 container start 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:42:35 np0005539482 podman[272685]: 2025-11-29 05:42:35.440905109 +0000 UTC m=+0.114646054 container attach 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:42:35 np0005539482 podman[272685]: 2025-11-29 05:42:35.347956108 +0000 UTC m=+0.021697073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:42:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:35 np0005539482 nova_compute[254898]: 2025-11-29 05:42:35.961 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 44 KiB/s wr, 2 op/s
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]: {
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "osd_id": 0,
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "type": "bluestore"
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:    },
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "osd_id": 1,
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "type": "bluestore"
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:    },
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "osd_id": 2,
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:        "type": "bluestore"
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]:    }
Nov 29 00:42:36 np0005539482 elastic_babbage[272701]: }
Nov 29 00:42:36 np0005539482 systemd[1]: libpod-900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9.scope: Deactivated successfully.
Nov 29 00:42:36 np0005539482 podman[272734]: 2025-11-29 05:42:36.360514603 +0000 UTC m=+0.021469289 container died 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:42:36 np0005539482 systemd[1]: var-lib-containers-storage-overlay-20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d-merged.mount: Deactivated successfully.
Nov 29 00:42:36 np0005539482 podman[272734]: 2025-11-29 05:42:36.409007171 +0000 UTC m=+0.069961837 container remove 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:42:36 np0005539482 systemd[1]: libpod-conmon-900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9.scope: Deactivated successfully.
Nov 29 00:42:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:42:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:42:36 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:42:36 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:42:36 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 95e72e6f-30ba-423b-bad3-6ea9c7019ab8 does not exist
Nov 29 00:42:36 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a53babfb-a8c2-4810-bfa0-ffe1d4e68eb3 does not exist
Nov 29 00:42:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:42:36 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:42:36 np0005539482 nova_compute[254898]: 2025-11-29 05:42:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:37 np0005539482 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:37 np0005539482 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:37 np0005539482 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:37 np0005539482 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:37 np0005539482 nova_compute[254898]: 2025-11-29 05:42:37.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:42:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 1 op/s
Nov 29 00:42:38 np0005539482 nova_compute[254898]: 2025-11-29 05:42:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Nov 29 00:42:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:40 np0005539482 nova_compute[254898]: 2025-11-29 05:42:40.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:42:40 np0005539482 nova_compute[254898]: 2025-11-29 05:42:40.978 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:42:40 np0005539482 nova_compute[254898]: 2025-11-29 05:42:40.979 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:42:40 np0005539482 nova_compute[254898]: 2025-11-29 05:42:40.979 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:42:40 np0005539482 nova_compute[254898]: 2025-11-29 05:42:40.980 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:42:40 np0005539482 nova_compute[254898]: 2025-11-29 05:42:40.980 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:42:41
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:42:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:42:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857270262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.394 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f97ef850>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f96d7fa0>)]
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.526 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.527 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5036MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.527 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.528 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.593 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.593 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 00:42:41 np0005539482 nova_compute[254898]: 2025-11-29 05:42:41.615 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97/.meta.tmp'
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97/.meta.tmp' to config b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97/.meta'
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "format": "json"}]: dispatch
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:42:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 00:42:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:42:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211080453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:42:42 np0005539482 nova_compute[254898]: 2025-11-29 05:42:42.030 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 00:42:42 np0005539482 nova_compute[254898]: 2025-11-29 05:42:42.035 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 00:42:42 np0005539482 nova_compute[254898]: 2025-11-29 05:42:42.050 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 00:42:42 np0005539482 nova_compute[254898]: 2025-11-29 05:42:42.051 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 00:42:42 np0005539482 nova_compute[254898]: 2025-11-29 05:42:42.052 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 1 op/s
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp'
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp' to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta'
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "format": "json"}]: dispatch
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:42 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:42:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:42:43 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.csskcz(active, since 34m)
Nov 29 00:42:44 np0005539482 nova_compute[254898]: 2025-11-29 05:42:44.050 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:42:44 np0005539482 nova_compute[254898]: 2025-11-29 05:42:44.050 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:42:44 np0005539482 nova_compute[254898]: 2025-11-29 05:42:44.050 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 00:42:44 np0005539482 nova_compute[254898]: 2025-11-29 05:42:44.051 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 00:42:44 np0005539482 nova_compute[254898]: 2025-11-29 05:42:44.093 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 00:42:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 1 op/s
Nov 29 00:42:44 np0005539482 podman[272843]: 2025-11-29 05:42:44.99874577 +0000 UTC m=+0.052123127 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 00:42:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Nov 29 00:42:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8", "format": "json"}]: dispatch
Nov 29 00:42:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 58 KiB/s wr, 3 op/s
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 31 KiB/s wr, 2 op/s
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "format": "json"}]: dispatch
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98efc0d9-c20a-4e7b-a016-a71069116a97, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98efc0d9-c20a-4e7b-a016-a71069116a97, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:48 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:48.621+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98efc0d9-c20a-4e7b-a016-a71069116a97' of type subvolume
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98efc0d9-c20a-4e7b-a016-a71069116a97' of type subvolume
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "force": true, "format": "json"}]: dispatch
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97'' moved to trashcan
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:42:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab", "force": true, "format": "json"}]: dispatch
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp'
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp' to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta'
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8", "force": true, "format": "json"}]: dispatch
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp'
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp' to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta'
Nov 29 00:42:50 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004662470697401836 of space, bias 4.0, pg target 0.5594964836882204 quantized to 16 (current 16)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:42:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0/.meta.tmp'
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0/.meta.tmp' to config b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0/.meta'
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "format": "json"}]: dispatch
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 2 op/s
Nov 29 00:42:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 00:42:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:42:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:42:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 29 00:42:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 00:42:53 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "format": "json"}]: dispatch
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:53 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:53.808+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb848b69-a318-4691-8a4b-5a72fc808dc6' of type subvolume
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb848b69-a318-4691-8a4b-5a72fc808dc6' of type subvolume
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "force": true, "format": "json"}]: dispatch
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6'' moved to trashcan
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:42:53 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 00:42:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 3 op/s
Nov 29 00:42:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 75 KiB/s wr, 4 op/s
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "format": "json"}]: dispatch
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:42:56 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:56.866+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1e617bb-f4ed-4cc8-b966-2d95665d32f0' of type subvolume
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1e617bb-f4ed-4cc8-b966-2d95665d32f0' of type subvolume
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "force": true, "format": "json"}]: dispatch
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0'' moved to trashcan
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:42:56 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 00:42:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 75 KiB/s wr, 4 op/s
Nov 29 00:42:59 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:42:59.804 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:42:59 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:42:59.805 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:42:59 np0005539482 podman[272862]: 2025-11-29 05:42:59.998690845 +0000 UTC m=+0.053441768 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 84 KiB/s wr, 5 op/s
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab'' moved to trashcan
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:43:00 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 00:43:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:00 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:43:00.806 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:43:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 84 KiB/s wr, 5 op/s
Nov 29 00:43:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 79 KiB/s wr, 5 op/s
Nov 29 00:43:05 np0005539482 podman[272883]: 2025-11-29 05:43:05.069448269 +0000 UTC m=+0.111335835 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 00:43:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 29 00:43:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:43:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 00:43:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 81 KiB/s wr, 5 op/s
Nov 29 00:43:06 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 00:43:07 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:43:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 2 op/s
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "format": "json"}]: dispatch
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dca14011-a433-40d4-8754-3eaafbae5faa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dca14011-a433-40d4-8754-3eaafbae5faa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:09 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:09.781+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dca14011-a433-40d4-8754-3eaafbae5faa' of type subvolume
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dca14011-a433-40d4-8754-3eaafbae5faa' of type subvolume
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa'' moved to trashcan
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:43:09 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 00:43:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Nov 29 00:43:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:43:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:43:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:43:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:43:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:43:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:43:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Nov 29 00:43:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 29 00:43:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 00:43:13 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 29 00:43:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:43:13.759 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:43:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:43:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:43:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:43:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:43:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 41 KiB/s wr, 3 op/s
Nov 29 00:43:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:43:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112252909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:43:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:43:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112252909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:43:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:16 np0005539482 podman[272910]: 2025-11-29 05:43:16.039646497 +0000 UTC m=+0.083578586 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 00:43:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 63 KiB/s wr, 4 op/s
Nov 29 00:43:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Nov 29 00:43:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Nov 29 00:43:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 29 00:43:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 00:43:20 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 29 00:43:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 228 B/s rd, 46 KiB/s wr, 2 op/s
Nov 29 00:43:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Nov 29 00:43:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp'
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp' to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta'
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "format": "json"}]: dispatch
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:43:28 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:43:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:43:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:43:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Nov 29 00:43:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:31 np0005539482 podman[272929]: 2025-11-29 05:43:31.002021477 +0000 UTC m=+0.055776924 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:43:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341", "format": "json"}]: dispatch
Nov 29 00:43:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:43:31 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:43:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 29 00:43:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Nov 29 00:43:35 np0005539482 podman[272953]: 2025-11-29 05:43:35.22837018 +0000 UTC m=+0.063060841 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 00:43:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 5b11db75-1ad1-41c5-9f18-aca8b756710d does not exist
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ac786599-6ce3-4b0d-b976-b6d9a865a6cf does not exist
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev c08f1189-f624-4a11-85d7-47614fc4d6ef does not exist
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.803665029 +0000 UTC m=+0.046274157 container create a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976/.meta.tmp'
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976/.meta.tmp' to config b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976/.meta'
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "format": "json"}]: dispatch
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 00:43:37 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:43:37 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:43:37 np0005539482 systemd[1]: Started libpod-conmon-a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4.scope.
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.782770885 +0000 UTC m=+0.025380103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:43:37 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.902926611 +0000 UTC m=+0.145535769 container init a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.911651531 +0000 UTC m=+0.154260669 container start a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.91489865 +0000 UTC m=+0.157507778 container attach a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:43:37 np0005539482 heuristic_hamilton[273269]: 167 167
Nov 29 00:43:37 np0005539482 systemd[1]: libpod-a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4.scope: Deactivated successfully.
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.919626174 +0000 UTC m=+0.162235302 container died a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:43:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3a6a9898aee52f0f158b1dc6ded98f9833c88ec10adc41326f34214d63e96572-merged.mount: Deactivated successfully.
Nov 29 00:43:37 np0005539482 nova_compute[254898]: 2025-11-29 05:43:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:37 np0005539482 nova_compute[254898]: 2025-11-29 05:43:37.956 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:37 np0005539482 podman[273253]: 2025-11-29 05:43:37.95681094 +0000 UTC m=+0.199420068 container remove a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:43:37 np0005539482 systemd[1]: libpod-conmon-a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4.scope: Deactivated successfully.
Nov 29 00:43:38 np0005539482 podman[273293]: 2025-11-29 05:43:38.132719439 +0000 UTC m=+0.043259203 container create c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 00:43:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Nov 29 00:43:38 np0005539482 systemd[1]: Started libpod-conmon-c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b.scope.
Nov 29 00:43:38 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:43:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:38 np0005539482 podman[273293]: 2025-11-29 05:43:38.115490014 +0000 UTC m=+0.026029798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:43:38 np0005539482 podman[273293]: 2025-11-29 05:43:38.210259149 +0000 UTC m=+0.120798963 container init c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:43:38 np0005539482 podman[273293]: 2025-11-29 05:43:38.227640767 +0000 UTC m=+0.138180531 container start c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:43:38 np0005539482 podman[273293]: 2025-11-29 05:43:38.230890275 +0000 UTC m=+0.141430029 container attach c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 00:43:38 np0005539482 nova_compute[254898]: 2025-11-29 05:43:38.970 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:39 np0005539482 hardcore_bassi[273309]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:43:39 np0005539482 hardcore_bassi[273309]: --> relative data size: 1.0
Nov 29 00:43:39 np0005539482 hardcore_bassi[273309]: --> All data devices are unavailable
Nov 29 00:43:39 np0005539482 systemd[1]: libpod-c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b.scope: Deactivated successfully.
Nov 29 00:43:39 np0005539482 podman[273293]: 2025-11-29 05:43:39.275728228 +0000 UTC m=+1.186268022 container died c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:43:39 np0005539482 systemd[1]: var-lib-containers-storage-overlay-002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0-merged.mount: Deactivated successfully.
Nov 29 00:43:39 np0005539482 podman[273293]: 2025-11-29 05:43:39.3347186 +0000 UTC m=+1.245258364 container remove c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:43:39 np0005539482 systemd[1]: libpod-conmon-c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b.scope: Deactivated successfully.
Nov 29 00:43:39 np0005539482 nova_compute[254898]: 2025-11-29 05:43:39.990 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.035595782 +0000 UTC m=+0.038169911 container create 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:43:40 np0005539482 systemd[1]: Started libpod-conmon-79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc.scope.
Nov 29 00:43:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.108328225 +0000 UTC m=+0.110902334 container init 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.017417853 +0000 UTC m=+0.019991962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.114621487 +0000 UTC m=+0.117195606 container start 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.11765422 +0000 UTC m=+0.120228379 container attach 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:43:40 np0005539482 happy_keldysh[273507]: 167 167
Nov 29 00:43:40 np0005539482 systemd[1]: libpod-79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc.scope: Deactivated successfully.
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.119543525 +0000 UTC m=+0.122117634 container died 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:43:40 np0005539482 systemd[1]: var-lib-containers-storage-overlay-5b545dc7f1fd1f8602b8cd8648f8518b3046cff54700882f5837176747cafb2c-merged.mount: Deactivated successfully.
Nov 29 00:43:40 np0005539482 podman[273491]: 2025-11-29 05:43:40.15540145 +0000 UTC m=+0.157975559 container remove 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:43:40 np0005539482 systemd[1]: libpod-conmon-79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc.scope: Deactivated successfully.
Nov 29 00:43:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 2 op/s
Nov 29 00:43:40 np0005539482 podman[273531]: 2025-11-29 05:43:40.298430977 +0000 UTC m=+0.037767682 container create 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 00:43:40 np0005539482 systemd[1]: Started libpod-conmon-246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2.scope.
Nov 29 00:43:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:43:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:40 np0005539482 podman[273531]: 2025-11-29 05:43:40.366557619 +0000 UTC m=+0.105894334 container init 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 00:43:40 np0005539482 podman[273531]: 2025-11-29 05:43:40.374959471 +0000 UTC m=+0.114296176 container start 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:43:40 np0005539482 podman[273531]: 2025-11-29 05:43:40.282410351 +0000 UTC m=+0.021747066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:43:40 np0005539482 podman[273531]: 2025-11-29 05:43:40.378012965 +0000 UTC m=+0.117349670 container attach 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 00:43:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:40 np0005539482 nova_compute[254898]: 2025-11-29 05:43:40.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:41 np0005539482 confident_black[273546]: {
Nov 29 00:43:41 np0005539482 confident_black[273546]:    "0": [
Nov 29 00:43:41 np0005539482 confident_black[273546]:        {
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "devices": [
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "/dev/loop3"
Nov 29 00:43:41 np0005539482 confident_black[273546]:            ],
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_name": "ceph_lv0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_size": "21470642176",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "name": "ceph_lv0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "tags": {
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cluster_name": "ceph",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.crush_device_class": "",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.encrypted": "0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osd_id": "0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.type": "block",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.vdo": "0"
Nov 29 00:43:41 np0005539482 confident_black[273546]:            },
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "type": "block",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "vg_name": "ceph_vg0"
Nov 29 00:43:41 np0005539482 confident_black[273546]:        }
Nov 29 00:43:41 np0005539482 confident_black[273546]:    ],
Nov 29 00:43:41 np0005539482 confident_black[273546]:    "1": [
Nov 29 00:43:41 np0005539482 confident_black[273546]:        {
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "devices": [
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "/dev/loop4"
Nov 29 00:43:41 np0005539482 confident_black[273546]:            ],
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_name": "ceph_lv1",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_size": "21470642176",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "name": "ceph_lv1",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "tags": {
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cluster_name": "ceph",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.crush_device_class": "",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.encrypted": "0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osd_id": "1",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.type": "block",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.vdo": "0"
Nov 29 00:43:41 np0005539482 confident_black[273546]:            },
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "type": "block",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "vg_name": "ceph_vg1"
Nov 29 00:43:41 np0005539482 confident_black[273546]:        }
Nov 29 00:43:41 np0005539482 confident_black[273546]:    ],
Nov 29 00:43:41 np0005539482 confident_black[273546]:    "2": [
Nov 29 00:43:41 np0005539482 confident_black[273546]:        {
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "devices": [
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "/dev/loop5"
Nov 29 00:43:41 np0005539482 confident_black[273546]:            ],
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_name": "ceph_lv2",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_size": "21470642176",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "name": "ceph_lv2",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "tags": {
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.cluster_name": "ceph",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.crush_device_class": "",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.encrypted": "0",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osd_id": "2",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.type": "block",
Nov 29 00:43:41 np0005539482 confident_black[273546]:                "ceph.vdo": "0"
Nov 29 00:43:41 np0005539482 confident_black[273546]:            },
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "type": "block",
Nov 29 00:43:41 np0005539482 confident_black[273546]:            "vg_name": "ceph_vg2"
Nov 29 00:43:41 np0005539482 confident_black[273546]:        }
Nov 29 00:43:41 np0005539482 confident_black[273546]:    ]
Nov 29 00:43:41 np0005539482 confident_black[273546]: }
Nov 29 00:43:41 np0005539482 systemd[1]: libpod-246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2.scope: Deactivated successfully.
Nov 29 00:43:41 np0005539482 podman[273531]: 2025-11-29 05:43:41.161637361 +0000 UTC m=+0.900974066 container died 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 00:43:41 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720-merged.mount: Deactivated successfully.
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.327 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.328 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.328 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.329 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.329 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:43:41 np0005539482 podman[273531]: 2025-11-29 05:43:41.353718131 +0000 UTC m=+1.093054876 container remove 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:43:41 np0005539482 systemd[1]: libpod-conmon-246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2.scope: Deactivated successfully.
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:43:41
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr']
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "format": "json"}]: dispatch
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:41 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:41.462+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a2d6206-4f8b-4475-a6b1-28b365cca976' of type subvolume
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a2d6206-4f8b-4475-a6b1-28b365cca976' of type subvolume
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976'' moved to trashcan
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:43:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:43:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:43:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163980884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.768 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.921 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.922 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.922 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:43:41 np0005539482 nova_compute[254898]: 2025-11-29 05:43:41.923 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:42.013347089 +0000 UTC m=+0.038154980 container create 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:43:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:43:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:43:42 np0005539482 systemd[1]: Started libpod-conmon-30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5.scope.
Nov 29 00:43:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:41.995660342 +0000 UTC m=+0.020468273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:42.095534159 +0000 UTC m=+0.120342090 container init 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:42.101569135 +0000 UTC m=+0.126377016 container start 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:42.104420934 +0000 UTC m=+0.129228825 container attach 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:43:42 np0005539482 stupefied_ellis[273748]: 167 167
Nov 29 00:43:42 np0005539482 systemd[1]: libpod-30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5.scope: Deactivated successfully.
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:42.106909814 +0000 UTC m=+0.131717765 container died 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 00:43:42 np0005539482 systemd[1]: var-lib-containers-storage-overlay-acc0fe91d45d1b7bf11b29ddfc950997cdccec4b387ba31008c3c159385357dd-merged.mount: Deactivated successfully.
Nov 29 00:43:42 np0005539482 podman[273732]: 2025-11-29 05:43:42.149176453 +0000 UTC m=+0.173984374 container remove 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.151 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.152 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:43:42 np0005539482 systemd[1]: libpod-conmon-30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5.scope: Deactivated successfully.
Nov 29 00:43:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.203 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.277 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.278 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.295 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 00:43:42 np0005539482 podman[273772]: 2025-11-29 05:43:42.326484115 +0000 UTC m=+0.036126611 container create 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.328 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.348 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:43:42 np0005539482 systemd[1]: Started libpod-conmon-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope.
Nov 29 00:43:42 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:43:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:42 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:43:42 np0005539482 podman[273772]: 2025-11-29 05:43:42.311681809 +0000 UTC m=+0.021324325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:43:42 np0005539482 podman[273772]: 2025-11-29 05:43:42.419794175 +0000 UTC m=+0.129436681 container init 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:43:42 np0005539482 podman[273772]: 2025-11-29 05:43:42.432164093 +0000 UTC m=+0.141806589 container start 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 00:43:42 np0005539482 podman[273772]: 2025-11-29 05:43:42.435600705 +0000 UTC m=+0.145243231 container attach 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:43:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:43:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624111269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.785 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.791 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.811 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.813 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.813 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 00:43:42 np0005539482 nova_compute[254898]: 2025-11-29 05:43:42.967 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 00:43:43 np0005539482 focused_saha[273790]: {
Nov 29 00:43:43 np0005539482 focused_saha[273790]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "osd_id": 0,
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "type": "bluestore"
Nov 29 00:43:43 np0005539482 focused_saha[273790]:    },
Nov 29 00:43:43 np0005539482 focused_saha[273790]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "osd_id": 1,
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "type": "bluestore"
Nov 29 00:43:43 np0005539482 focused_saha[273790]:    },
Nov 29 00:43:43 np0005539482 focused_saha[273790]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "osd_id": 2,
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:43:43 np0005539482 focused_saha[273790]:        "type": "bluestore"
Nov 29 00:43:43 np0005539482 focused_saha[273790]:    }
Nov 29 00:43:43 np0005539482 focused_saha[273790]: }
Nov 29 00:43:43 np0005539482 systemd[1]: libpod-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope: Deactivated successfully.
Nov 29 00:43:43 np0005539482 conmon[273790]: conmon 575a3778ec15d7c0e45a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope/container/memory.events
Nov 29 00:43:43 np0005539482 podman[273772]: 2025-11-29 05:43:43.383871401 +0000 UTC m=+1.093513907 container died 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:43:43 np0005539482 systemd[1]: var-lib-containers-storage-overlay-62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4-merged.mount: Deactivated successfully.
Nov 29 00:43:43 np0005539482 podman[273772]: 2025-11-29 05:43:43.447289969 +0000 UTC m=+1.156932505 container remove 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:43:43 np0005539482 systemd[1]: libpod-conmon-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope: Deactivated successfully.
Nov 29 00:43:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:43:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:43:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:43:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:43:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 3eeaf550-96aa-4a98-baec-815f8e2584d0 does not exist
Nov 29 00:43:43 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 53b52b17-a785-4567-b98c-70f6c9cf4a31 does not exist
Nov 29 00:43:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Nov 29 00:43:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:43:44 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:43:44 np0005539482 nova_compute[254898]: 2025-11-29 05:43:44.961 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:44 np0005539482 nova_compute[254898]: 2025-11-29 05:43:44.962 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:43:44 np0005539482 nova_compute[254898]: 2025-11-29 05:43:44.962 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:43:44 np0005539482 nova_compute[254898]: 2025-11-29 05:43:44.962 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:43:44 np0005539482 nova_compute[254898]: 2025-11-29 05:43:44.983 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa/.meta.tmp'
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa/.meta.tmp' to config b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa/.meta'
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "format": "json"}]: dispatch
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 00:43:45 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 00:43:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:43:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:43:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 2 op/s
Nov 29 00:43:47 np0005539482 podman[273908]: 2025-11-29 05:43:47.031341331 +0000 UTC m=+0.086525577 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "format": "json"}]: dispatch
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:48 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:48.958+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '19b69edf-a49a-4027-a0e5-36e1c4984bfa' of type subvolume
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '19b69edf-a49a-4027-a0e5-36e1c4984bfa' of type subvolume
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa'' moved to trashcan
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:43:48 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 00:43:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 3 op/s
Nov 29 00:43:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005100610674285342 of space, bias 4.0, pg target 0.6120732809142411 quantized to 16 (current 16)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:43:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf/.meta.tmp'
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf/.meta.tmp' to config b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf/.meta'
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "format": "json"}]: dispatch
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 00:43:52 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 00:43:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:43:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:43:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "format": "json"}]: dispatch
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:16246e8b-77e5-4422-a8a4-1522b5502edf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:16246e8b-77e5-4422-a8a4-1522b5502edf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:43:55 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:55.892+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16246e8b-77e5-4422-a8a4-1522b5502edf' of type subvolume
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16246e8b-77e5-4422-a8a4-1522b5502edf' of type subvolume
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "force": true, "format": "json"}]: dispatch
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf'' moved to trashcan
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:43:55 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 00:43:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:43:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 68 KiB/s wr, 4 op/s
Nov 29 00:43:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 2 op/s
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845/.meta.tmp'
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845/.meta.tmp' to config b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845/.meta'
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "format": "json"}]: dispatch
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 00:43:59 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 00:43:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:43:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:44:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 4 op/s
Nov 29 00:44:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:02 np0005539482 podman[273928]: 2025-11-29 05:44:02.018339113 +0000 UTC m=+0.068452129 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:44:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "format": "json"}]: dispatch
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:44:03 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:44:03.371+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fa3cb891-f31e-45d1-aaa6-1610fdda8845' of type subvolume
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fa3cb891-f31e-45d1-aaa6-1610fdda8845' of type subvolume
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "force": true, "format": "json"}]: dispatch
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845'' moved to trashcan
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:44:03 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 00:44:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 00:44:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:06 np0005539482 podman[273948]: 2025-11-29 05:44:06.072836041 +0000 UTC m=+0.116638215 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 74 KiB/s wr, 4 op/s
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48/.meta.tmp'
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48/.meta.tmp' to config b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48/.meta'
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "format": "json"}]: dispatch
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 00:44:06 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 00:44:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 00:44:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 00:44:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 56 KiB/s wr, 2 op/s
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 83 KiB/s wr, 4 op/s
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "format": "json"}]: dispatch
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:44:10 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:44:10.705+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '479a8e74-0da9-4e81-a8a6-b7eb56d43c48' of type subvolume
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '479a8e74-0da9-4e81-a8a6-b7eb56d43c48' of type subvolume
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "force": true, "format": "json"}]: dispatch
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48'' moved to trashcan
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:44:10 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 00:44:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:44:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:44:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:44:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:44:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:44:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:44:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Nov 29 00:44:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:44:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:44:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:44:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:44:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:44:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Nov 29 00:44:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:44:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/904247315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:44:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:44:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/904247315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc", "force": true, "format": "json"}]: dispatch
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp'
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp' to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta'
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341", "force": true, "format": "json"}]: dispatch
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp'
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp' to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta'
Nov 29 00:44:14 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:44:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 4 op/s
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "format": "json"}]: dispatch
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 00:44:17 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:44:17.932+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '212fdd6d-2482-42c2-82e5-a1ecfd70ce27' of type subvolume
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '212fdd6d-2482-42c2-82e5-a1ecfd70ce27' of type subvolume
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "force": true, "format": "json"}]: dispatch
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27'' moved to trashcan
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 00:44:17 np0005539482 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 00:44:18 np0005539482 podman[273974]: 2025-11-29 05:44:18.01916002 +0000 UTC m=+0.072937855 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 00:44:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 59 KiB/s wr, 3 op/s
Nov 29 00:44:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 82 KiB/s wr, 4 op/s
Nov 29 00:44:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 00:44:22 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:44:22.627 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:44:22 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:44:22.629 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:44:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 29 00:44:23 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 00:44:23 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 29 00:44:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 67 KiB/s wr, 3 op/s
Nov 29 00:44:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 2 op/s
Nov 29 00:44:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 2 op/s
Nov 29 00:44:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Nov 29 00:44:30 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:44:30.631 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:44:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 29 00:44:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 00:44:30 np0005539482 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 29 00:44:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 229 B/s rd, 28 KiB/s wr, 1 op/s
Nov 29 00:44:33 np0005539482 podman[273994]: 2025-11-29 05:44:33.015572153 +0000 UTC m=+0.074119164 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 00:44:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Nov 29 00:44:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 29 00:44:37 np0005539482 podman[274014]: 2025-11-29 05:44:37.058972819 +0000 UTC m=+0.100345118 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 00:44:37 np0005539482 nova_compute[254898]: 2025-11-29 05:44:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 29 00:44:39 np0005539482 nova_compute[254898]: 2025-11-29 05:44:39.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:39 np0005539482 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:39 np0005539482 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:39 np0005539482 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:39 np0005539482 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:39 np0005539482 nova_compute[254898]: 2025-11-29 05:44:39.965 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:44:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 0 op/s
Nov 29 00:44:40 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:40 np0005539482 nova_compute[254898]: 2025-11-29 05:44:40.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:44:41
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms']
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:44:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:44:41 np0005539482 nova_compute[254898]: 2025-11-29 05:44:41.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:41 np0005539482 nova_compute[254898]: 2025-11-29 05:44:41.977 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:44:41 np0005539482 nova_compute[254898]: 2025-11-29 05:44:41.977 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:44:41 np0005539482 nova_compute[254898]: 2025-11-29 05:44:41.977 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:44:41 np0005539482 nova_compute[254898]: 2025-11-29 05:44:41.978 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:44:41 np0005539482 nova_compute[254898]: 2025-11-29 05:44:41.978 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:44:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:44:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:44:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Nov 29 00:44:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:44:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864051765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.434 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.614 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.615 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.615 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.615 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.667 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.667 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:44:42 np0005539482 nova_compute[254898]: 2025-11-29 05:44:42.689 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:44:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:44:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468906332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:44:43 np0005539482 nova_compute[254898]: 2025-11-29 05:44:43.101 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:44:43 np0005539482 nova_compute[254898]: 2025-11-29 05:44:43.106 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:44:43 np0005539482 nova_compute[254898]: 2025-11-29 05:44:43.140 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:44:43 np0005539482 nova_compute[254898]: 2025-11-29 05:44:43.142 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:44:43 np0005539482 nova_compute[254898]: 2025-11-29 05:44:43.143 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:44:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:44:44 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev bcd232e2-308d-4405-86f3-1fd63d39b039 does not exist
Nov 29 00:44:44 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 42af8ed8-a182-4598-add6-a86a41099650 does not exist
Nov 29 00:44:44 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ce428be3-8b6d-40f4-b401-fc56cbfbfba4 does not exist
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:44:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.076124752 +0000 UTC m=+0.041580671 container create 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:44:45 np0005539482 systemd[1]: Started libpod-conmon-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope.
Nov 29 00:44:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.056780591 +0000 UTC m=+0.022236600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.155873339 +0000 UTC m=+0.121329308 container init 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.167562616 +0000 UTC m=+0.133018545 container start 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.170905586 +0000 UTC m=+0.136361525 container attach 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:44:45 np0005539482 affectionate_greider[274374]: 167 167
Nov 29 00:44:45 np0005539482 systemd[1]: libpod-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope: Deactivated successfully.
Nov 29 00:44:45 np0005539482 conmon[274374]: conmon 08dfa279b4c94370a0c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope/container/memory.events
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.174651165 +0000 UTC m=+0.140107084 container died 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:44:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-185446150ef33fffcbe69642daa2eb7876338c49cad3be33257b99f4d3802ae0-merged.mount: Deactivated successfully.
Nov 29 00:44:45 np0005539482 podman[274357]: 2025-11-29 05:44:45.222840832 +0000 UTC m=+0.188296751 container remove 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:44:45 np0005539482 systemd[1]: libpod-conmon-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope: Deactivated successfully.
Nov 29 00:44:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:44:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:44:45 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:44:45 np0005539482 podman[274398]: 2025-11-29 05:44:45.389151048 +0000 UTC m=+0.056368993 container create d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:44:45 np0005539482 systemd[1]: Started libpod-conmon-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope.
Nov 29 00:44:45 np0005539482 podman[274398]: 2025-11-29 05:44:45.359569254 +0000 UTC m=+0.026787299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:44:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:44:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:45 np0005539482 podman[274398]: 2025-11-29 05:44:45.486506933 +0000 UTC m=+0.153724938 container init d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:44:45 np0005539482 podman[274398]: 2025-11-29 05:44:45.49943931 +0000 UTC m=+0.166657255 container start d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:44:45 np0005539482 podman[274398]: 2025-11-29 05:44:45.50278141 +0000 UTC m=+0.169999405 container attach d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:44:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 00:44:46 np0005539482 strange_bardeen[274414]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:44:46 np0005539482 strange_bardeen[274414]: --> relative data size: 1.0
Nov 29 00:44:46 np0005539482 strange_bardeen[274414]: --> All data devices are unavailable
Nov 29 00:44:46 np0005539482 systemd[1]: libpod-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope: Deactivated successfully.
Nov 29 00:44:46 np0005539482 conmon[274414]: conmon d4df85fae5f7d74c4a1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope/container/memory.events
Nov 29 00:44:46 np0005539482 podman[274398]: 2025-11-29 05:44:46.511871522 +0000 UTC m=+1.179089467 container died d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:44:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f-merged.mount: Deactivated successfully.
Nov 29 00:44:46 np0005539482 podman[274398]: 2025-11-29 05:44:46.567316641 +0000 UTC m=+1.234534586 container remove d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 00:44:46 np0005539482 systemd[1]: libpod-conmon-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope: Deactivated successfully.
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.207464467 +0000 UTC m=+0.054820315 container create fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:44:47 np0005539482 systemd[1]: Started libpod-conmon-fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629.scope.
Nov 29 00:44:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.184542452 +0000 UTC m=+0.031898360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.287232334 +0000 UTC m=+0.134588202 container init fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.29377514 +0000 UTC m=+0.141130988 container start fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.297307445 +0000 UTC m=+0.144663323 container attach fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:44:47 np0005539482 youthful_perlman[274616]: 167 167
Nov 29 00:44:47 np0005539482 systemd[1]: libpod-fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629.scope: Deactivated successfully.
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.299701811 +0000 UTC m=+0.147057649 container died fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:44:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d8774ef51880a6cb5097baf83b07c13931c115620f52f55433752315f0c4fed9-merged.mount: Deactivated successfully.
Nov 29 00:44:47 np0005539482 podman[274599]: 2025-11-29 05:44:47.333562676 +0000 UTC m=+0.180918504 container remove fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:44:47 np0005539482 systemd[1]: libpod-conmon-fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629.scope: Deactivated successfully.
Nov 29 00:44:47 np0005539482 podman[274641]: 2025-11-29 05:44:47.504752219 +0000 UTC m=+0.051997648 container create 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 00:44:47 np0005539482 systemd[1]: Started libpod-conmon-955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff.scope.
Nov 29 00:44:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:44:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:47 np0005539482 podman[274641]: 2025-11-29 05:44:47.573101255 +0000 UTC m=+0.120346704 container init 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:44:47 np0005539482 podman[274641]: 2025-11-29 05:44:47.48887832 +0000 UTC m=+0.036123769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:44:47 np0005539482 podman[274641]: 2025-11-29 05:44:47.588681015 +0000 UTC m=+0.135926484 container start 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:44:47 np0005539482 podman[274641]: 2025-11-29 05:44:47.593061569 +0000 UTC m=+0.140307018 container attach 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:44:48 np0005539482 nova_compute[254898]: 2025-11-29 05:44:48.139 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:48 np0005539482 nova_compute[254898]: 2025-11-29 05:44:48.141 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:44:48 np0005539482 nova_compute[254898]: 2025-11-29 05:44:48.141 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:44:48 np0005539482 nova_compute[254898]: 2025-11-29 05:44:48.141 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:44:48 np0005539482 nova_compute[254898]: 2025-11-29 05:44:48.165 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:44:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 00:44:48 np0005539482 musing_banzai[274658]: {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:    "0": [
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:        {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "devices": [
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "/dev/loop3"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            ],
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_name": "ceph_lv0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_size": "21470642176",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "name": "ceph_lv0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "tags": {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cluster_name": "ceph",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.crush_device_class": "",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.encrypted": "0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osd_id": "0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.type": "block",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.vdo": "0"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            },
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "type": "block",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "vg_name": "ceph_vg0"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:        }
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:    ],
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:    "1": [
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:        {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "devices": [
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "/dev/loop4"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            ],
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_name": "ceph_lv1",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_size": "21470642176",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "name": "ceph_lv1",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "tags": {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cluster_name": "ceph",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.crush_device_class": "",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.encrypted": "0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osd_id": "1",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.type": "block",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.vdo": "0"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            },
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "type": "block",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "vg_name": "ceph_vg1"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:        }
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:    ],
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:    "2": [
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:        {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "devices": [
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "/dev/loop5"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            ],
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_name": "ceph_lv2",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_size": "21470642176",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "name": "ceph_lv2",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "tags": {
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.cluster_name": "ceph",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.crush_device_class": "",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.encrypted": "0",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osd_id": "2",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.type": "block",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:                "ceph.vdo": "0"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            },
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "type": "block",
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:            "vg_name": "ceph_vg2"
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:        }
Nov 29 00:44:48 np0005539482 musing_banzai[274658]:    ]
Nov 29 00:44:48 np0005539482 musing_banzai[274658]: }
Nov 29 00:44:48 np0005539482 systemd[1]: libpod-955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff.scope: Deactivated successfully.
Nov 29 00:44:48 np0005539482 podman[274641]: 2025-11-29 05:44:48.356240872 +0000 UTC m=+0.903486301 container died 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:44:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896-merged.mount: Deactivated successfully.
Nov 29 00:44:48 np0005539482 podman[274641]: 2025-11-29 05:44:48.419130608 +0000 UTC m=+0.966376047 container remove 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:44:48 np0005539482 systemd[1]: libpod-conmon-955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff.scope: Deactivated successfully.
Nov 29 00:44:48 np0005539482 podman[274668]: 2025-11-29 05:44:48.48060519 +0000 UTC m=+0.074243397 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:49.013948976 +0000 UTC m=+0.038189340 container create 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:44:49 np0005539482 systemd[1]: Started libpod-conmon-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope.
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:48.995552788 +0000 UTC m=+0.019793192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:44:49 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:49.12217603 +0000 UTC m=+0.146416484 container init 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:49.130324684 +0000 UTC m=+0.154565048 container start 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:49.133662613 +0000 UTC m=+0.157903007 container attach 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 00:44:49 np0005539482 kind_banzai[274853]: 167 167
Nov 29 00:44:49 np0005539482 systemd[1]: libpod-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope: Deactivated successfully.
Nov 29 00:44:49 np0005539482 conmon[274853]: conmon 944d40f710cdda4d6108 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope/container/memory.events
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:49.136695345 +0000 UTC m=+0.160935749 container died 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:44:49 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f005368ff4bc0b921e5a693a4d1f9564e6ec069dd89b4f814e819a2ec646a774-merged.mount: Deactivated successfully.
Nov 29 00:44:49 np0005539482 podman[274837]: 2025-11-29 05:44:49.181502991 +0000 UTC m=+0.205743395 container remove 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:44:49 np0005539482 systemd[1]: libpod-conmon-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope: Deactivated successfully.
Nov 29 00:44:49 np0005539482 podman[274877]: 2025-11-29 05:44:49.366342547 +0000 UTC m=+0.040687708 container create 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:44:49 np0005539482 systemd[1]: Started libpod-conmon-44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd.scope.
Nov 29 00:44:49 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:44:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:49 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:44:49 np0005539482 podman[274877]: 2025-11-29 05:44:49.435961813 +0000 UTC m=+0.110306984 container init 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:44:49 np0005539482 podman[274877]: 2025-11-29 05:44:49.349970029 +0000 UTC m=+0.024315240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:44:49 np0005539482 podman[274877]: 2025-11-29 05:44:49.446037273 +0000 UTC m=+0.120382434 container start 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:44:49 np0005539482 podman[274877]: 2025-11-29 05:44:49.448759318 +0000 UTC m=+0.123104499 container attach 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:44:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 00:44:50 np0005539482 modest_merkle[274894]: {
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "osd_id": 0,
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "type": "bluestore"
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:    },
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "osd_id": 1,
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "type": "bluestore"
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:    },
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "osd_id": 2,
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:        "type": "bluestore"
Nov 29 00:44:50 np0005539482 modest_merkle[274894]:    }
Nov 29 00:44:50 np0005539482 modest_merkle[274894]: }
Nov 29 00:44:50 np0005539482 systemd[1]: libpod-44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd.scope: Deactivated successfully.
Nov 29 00:44:50 np0005539482 podman[274877]: 2025-11-29 05:44:50.338776648 +0000 UTC m=+1.013121809 container died 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:44:50 np0005539482 systemd[1]: var-lib-containers-storage-overlay-263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff-merged.mount: Deactivated successfully.
Nov 29 00:44:50 np0005539482 podman[274877]: 2025-11-29 05:44:50.392427434 +0000 UTC m=+1.066772605 container remove 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:44:50 np0005539482 systemd[1]: libpod-conmon-44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd.scope: Deactivated successfully.
Nov 29 00:44:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:44:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:44:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:44:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:44:50 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ef25af65-0e7c-493a-bd9c-dd645fc30d5a does not exist
Nov 29 00:44:50 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f38a7d0c-6751-4ece-89f8-070dd9c068cd does not exist
Nov 29 00:44:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:44:51 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:44:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:44:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:44:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:44:55 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:44:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:44:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:04 np0005539482 podman[274991]: 2025-11-29 05:45:04.020254961 +0000 UTC m=+0.059646730 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:45:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:08 np0005539482 podman[275012]: 2025-11-29 05:45:08.126805897 +0000 UTC m=+0.168160121 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:45:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:45:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:45:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:45:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:45:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:45:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:45:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:12 np0005539482 systemd-logind[793]: New session 51 of user zuul.
Nov 29 00:45:12 np0005539482 systemd[1]: Started Session 51 of User zuul.
Nov 29 00:45:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:45:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:45:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:45:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:45:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:45:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:45:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:45:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3802455169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:45:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:45:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3802455169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:45:15 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14515 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:15 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344424986' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.304038) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116304069, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2383, "num_deletes": 505, "total_data_size": 3399465, "memory_usage": 3449168, "flush_reason": "Manual Compaction"}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116321444, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3105267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26246, "largest_seqno": 28628, "table_properties": {"data_size": 3095353, "index_size": 5704, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25291, "raw_average_key_size": 20, "raw_value_size": 3072861, "raw_average_value_size": 2476, "num_data_blocks": 252, "num_entries": 1241, "num_filter_entries": 1241, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394907, "oldest_key_time": 1764394907, "file_creation_time": 1764395116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 17458 microseconds, and 8436 cpu microseconds.
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.321496) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3105267 bytes OK
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.321516) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.323327) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.323344) EVENT_LOG_v1 {"time_micros": 1764395116323338, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.323363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3388375, prev total WAL file size 3388375, number of live WAL files 2.
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.324517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3032KB)], [59(9974KB)]
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116324580, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13318741, "oldest_snapshot_seqno": -1}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5853 keys, 8814161 bytes, temperature: kUnknown
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116373504, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8814161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8774342, "index_size": 24093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 146251, "raw_average_key_size": 24, "raw_value_size": 8668741, "raw_average_value_size": 1481, "num_data_blocks": 988, "num_entries": 5853, "num_filter_entries": 5853, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.373749) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8814161 bytes
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.375130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 271.6 rd, 179.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.7 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.1) write-amplify(2.8) OK, records in: 6863, records dropped: 1010 output_compression: NoCompression
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.375147) EVENT_LOG_v1 {"time_micros": 1764395116375139, "job": 32, "event": "compaction_finished", "compaction_time_micros": 49034, "compaction_time_cpu_micros": 19910, "output_level": 6, "num_output_files": 1, "total_output_size": 8814161, "num_input_records": 6863, "num_output_records": 5853, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116375632, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116377356, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.324423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:45:16 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:45:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:19 np0005539482 podman[275327]: 2025-11-29 05:45:19.029124427 +0000 UTC m=+0.078903969 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 00:45:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:20 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:21 np0005539482 ovs-vsctl[275391]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 00:45:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:22 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 00:45:23 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 00:45:23 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 00:45:23 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: cache status {prefix=cache status} (starting...)
Nov 29 00:45:23 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: client ls {prefix=client ls} (starting...)
Nov 29 00:45:23 np0005539482 lvm[275722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 00:45:23 np0005539482 lvm[275722]: VG ceph_vg0 finished
Nov 29 00:45:23 np0005539482 lvm[275727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 00:45:23 np0005539482 lvm[275727]: VG ceph_vg1 finished
Nov 29 00:45:24 np0005539482 lvm[275760]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 00:45:24 np0005539482 lvm[275760]: VG ceph_vg2 finished
Nov 29 00:45:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14521 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 00:45:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 00:45:24 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14523 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 00:45:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 00:45:24 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425794923' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 00:45:25 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 00:45:25 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 00:45:25 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14529 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:25 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:45:25 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:45:25.352+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807027424' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:45:25 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 00:45:25 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: ops {prefix=ops} (starting...)
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573578558' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457309832' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 00:45:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 00:45:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352512391' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 00:45:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 00:45:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532092899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 00:45:26 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session ls {prefix=session ls} (starting...)
Nov 29 00:45:26 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: status {prefix=status} (starting...)
Nov 29 00:45:26 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14541 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 00:45:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580026601' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 00:45:27 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14545 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396287443' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808788625' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139287242' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 00:45:27 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517593296' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 00:45:28 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 00:45:28 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106118019' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 00:45:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:28 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14557 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:28 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:45:28.324+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 00:45:28 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 00:45:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 00:45:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904735651' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 00:45:29 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 00:45:29 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296138431' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 00:45:29 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14563 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:29 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:29 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:29 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 767992 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769139 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.991065025s of 13.020785332s, submitted: 8
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771436 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772583 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774877 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 1515520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774877 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.841436386s of 18.878847122s, submitted: 10
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778319 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 1449984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779466 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.001470566s of 12.036796570s, submitted: 10
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781762 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 1409024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 1392640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 1392640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 1384448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785204 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 1384448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.964669228s of 11.010027885s, submitted: 10
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787498 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789793 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 1327104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 1310720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 1310720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 792088 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.076447487s of 11.113275528s, submitted: 10
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795533 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796682 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.979099274s of 13.024011612s, submitted: 8
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797831 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798980 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801277 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.892942429s of 13.915967941s, submitted: 8
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802425 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803573 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804722 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.017806053s of 13.042335510s, submitted: 6
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807016 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808163 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.996475220s of 12.016470909s, submitted: 6
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810458 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811605 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812753 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.752367973s of 14.781843185s, submitted: 8
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815047 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816195 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 18.29 MB, 0.03 MB/s
Interval WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
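In the raw journal, multi-line daemon messages such as the RocksDB `DUMPING STATS` dump have their embedded control characters rendered as octal escapes: `#012` for newline and `#011` for tab. A minimal sketch for restoring the original line breaks when post-processing such logs from a text export:

```python
import re

# journald/syslog octal escapes for whitespace control characters:
# '#012' encodes '\n' (octal 012); '#011' encodes '\t' (octal 011).
ESCAPES = {"#012": "\n", "#011": "\t"}

def unescape_syslog(message: str) -> str:
    """Replace syslog octal escape sequences with the characters they encode."""
    return re.sub(r"#01[12]", lambda m: ESCAPES[m.group(0)], message)

flattened = "** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval"
print(unescape_syslog(flattened))
```

Alternatively, reading the journal directly with `journalctl --output=cat` (or the journal API) yields the message payload with real newlines, so no unescaping is needed.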
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.014465332s of 299.041870117s, submitted: 8
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557761d1dc00
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14567 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5631 writes, 23K keys, 5631 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5631 writes, 860 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.950073242s of 600.213012695s, submitted: 90
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 200.325714111s of 200.562088013s, submitted: 90
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 17432576 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916268 data_alloc: 218103808 data_used: 180224
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 123 ms_handle_reset con 0x557763f08000 session 0x5577631b30e0
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 17440768 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17481728 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 124 ms_handle_reset con 0x557765b97c00 session 0x557765010000
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe39000/0x0/0x4ffc00000, data 0xd2e970/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 17408000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925293 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe38000/0x0/0x4ffc00000, data 0xd2e993/0xde4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.306422234s of 10.512654305s, submitted: 45
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.103341103s of 12.113625526s, submitted: 13
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 10
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 11
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.626058578s of 10.632491112s, submitted: 2
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.001748085s of 12.013872147s, submitted: 4
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930625 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.040862083s of 12.053675652s, submitted: 4
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928167 data_alloc: 218103808 data_used: 184320
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.859819412s of 16.871786118s, submitted: 28
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 12
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 17235968 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939171 data_alloc: 218103808 data_used: 200704
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe2e000/0x0/0x4ffc00000, data 0xd33b54/0xdee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.211776733s of 18.236698151s, submitted: 18
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbe29000/0x0/0x4ffc00000, data 0xd372c6/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe25000/0x0/0x4ffc00000, data 0xd38edc/0xdf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951491 data_alloc: 218103808 data_used: 208896
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe23000/0x0/0x4ffc00000, data 0xd3aaf2/0xdfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe1f000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955423 data_alloc: 218103808 data_used: 212992
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.339121819s of 10.671369553s, submitted: 123
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 17047552 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe19000/0x0/0x4ffc00000, data 0xd3fd71/0xe03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961321 data_alloc: 218103808 data_used: 221184
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe1c000/0x0/0x4ffc00000, data 0xd3fcd6/0xe02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959079 data_alloc: 218103808 data_used: 221184
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.049346924s of 10.169968605s, submitted: 40
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965021 data_alloc: 218103808 data_used: 229376
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 15917056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964141 data_alloc: 218103808 data_used: 229376
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991994858s of 11.068979263s, submitted: 40
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0xd433da/0xe09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968139 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.993079185s of 12.021212578s, submitted: 14
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.520608902s of 10.532555580s, submitted: 3
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 15802368 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.498485565s of 13.504686356s, submitted: 2
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 ms_handle_reset con 0x557765b96800 session 0x557764f4fe00
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 13
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971135 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 14983168 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974495 data_alloc: 218103808 data_used: 237568
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.806947708s of 11.988073349s, submitted: 235
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980069 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0b000/0x0/0x4ffc00000, data 0xd4858e/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.121107101s of 24.133726120s, submitted: 13
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a08c/0xe15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984139 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a127/0xe16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986619 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986571 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.074976921s of 12.103597641s, submitted: 7
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a157/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988043 data_alloc: 218103808 data_used: 245760
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a185/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 140 handle_osd_map epochs [141,142], i have 140, src has [1,142]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993155 data_alloc: 218103808 data_used: 253952
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe01000/0x0/0x4ffc00000, data 0xd4d8db/0xe1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.097406387s of 11.276707649s, submitted: 61
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997329 data_alloc: 218103808 data_used: 262144
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f3f4/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998041 data_alloc: 218103808 data_used: 262144
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991775513s of 10.043452263s, submitted: 26
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 13819904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000645 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e25/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002413 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f7f/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.350621223s of 11.425502777s, submitted: 31
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf5000/0x0/0x4ffc00000, data 0xd51047/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007621 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 12541952 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf2000/0x0/0x4ffc00000, data 0xd511a7/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011157 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 12509184 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 12460032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd5117b/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.751610756s of 11.044014931s, submitted: 37
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 12419072 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010409 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf4000/0x0/0x4ffc00000, data 0xd510b1/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 12386304 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50fe8/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010199 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf6000/0x0/0x4ffc00000, data 0xd50fb7/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006959 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.834184647s of 10.926655769s, submitted: 30
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbd/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008855 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50e84/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010495 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022552490s of 10.158326149s, submitted: 18
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008551 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 12271616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.966061592s of 12.095813751s, submitted: 15
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbc/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009501 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e51/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.408202171s of 10.675523758s, submitted: 17
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011349 data_alloc: 218103808 data_used: 270336
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f47/0xe24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd526bd/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018011 data_alloc: 218103808 data_used: 278528
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0xd5a18b/0xe2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.418998718s of 10.583705902s, submitted: 59
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdc6000/0x0/0x4ffc00000, data 0xd839d6/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024267 data_alloc: 218103808 data_used: 278528
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdb9000/0x0/0x4ffc00000, data 0xd9187a/0xe64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [1])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 7217152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 6660096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 6668288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fabc9000/0x0/0x4ffc00000, data 0xde24b0/0xeb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 6643712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028237 data_alloc: 218103808 data_used: 278528
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 6725632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 6709248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fab98000/0x0/0x4ffc00000, data 0xe11be3/0xee5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5660672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 5603328 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.068192482s of 10.000307083s, submitted: 80
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab94000/0x0/0x4ffc00000, data 0xe13646/0xee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:30 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037963 data_alloc: 218103808 data_used: 286720
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab47000/0x0/0x4ffc00000, data 0xe61e3d/0xf36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4784128 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4300800 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 4071424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaea000/0x0/0x4ffc00000, data 0xebf2b3/0xf94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043289 data_alloc: 218103808 data_used: 294912
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3252224 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faab2000/0x0/0x4ffc00000, data 0xef3a69/0xfca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2957312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2662400 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xf07fb6/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.752583504s of 10.000064850s, submitted: 81
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 1425408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048977 data_alloc: 218103808 data_used: 294912
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1417216 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faa4a000/0x0/0x4ffc00000, data 0xf5e02d/0x1034000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86704128 unmapped: 2375680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 2940928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067411 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 1630208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.663159370s of 10.000439644s, submitted: 117
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2211840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080451 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 3194880 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4ea000/0x0/0x4ffc00000, data 0x10a58fb/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 3219456 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 14
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10afbb3/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 1466368 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093257 data_alloc: 218103808 data_used: 307200
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 1835008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x1157f2b/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1818624 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 1368064 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.1 total, 600.0 interval
                                              Cumulative writes: 7984 writes, 30K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                              Cumulative WAL: 7984 writes, 1865 syncs, 4.28 writes per sync, written: 0.03 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2353 writes, 6787 keys, 2353 commit groups, 1.0 writes per commit group, ingest: 7.64 MB, 0.01 MB/s
                                              Interval WAL: 2353 writes, 1005 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.250799179s of 10.555690765s, submitted: 96
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157dbe/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088671 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157df1/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157df1/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087325 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557764265800
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 2326528 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.960803032s of 10.004592896s, submitted: 14
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087841 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 2310144 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157cb6/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 2293760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089913 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 2285568 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157db0/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087327 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.876619339s of 11.963118553s, submitted: 28
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157c4c/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090125 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157ce1/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x1157d0c/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089131 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c46/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157b7f/0x1230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.725197792s of 12.809599876s, submitted: 25
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 2228224 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.085176468s of 10.108474731s, submitted: 6
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092185 data_alloc: 218103808 data_used: 311296
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090523 data_alloc: 218103808 data_used: 311296
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 2179072 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 2260992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 15
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x115cc1b/0x1236000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059730530s of 10.469105721s, submitted: 159
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x115ccb6/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x115e719/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x115e7b4/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104005 data_alloc: 218103808 data_used: 319488
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.492309570s of 11.524030685s, submitted: 14
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x115e8c4/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111955 data_alloc: 218103808 data_used: 327680
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 2088960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x11605e0/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89096192 unmapped: 2080768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117181 data_alloc: 218103808 data_used: 335872
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x1161ec8/0x1243000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.267296791s of 10.416739464s, submitted: 51
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 2048000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x1161e2d/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120383 data_alloc: 218103808 data_used: 344064
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 2031616 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 155 ms_handle_reset con 0x557763f08000 session 0x55776350d0e0
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 598016 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x1165511/0x1249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 16
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124387 data_alloc: 218103808 data_used: 344064
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.574859619s of 10.815853119s, submitted: 264
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0x1167127/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 157 handle_osd_map epochs [158,159], i have 157, src has [1,159]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137003 data_alloc: 218103808 data_used: 352256
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139835 data_alloc: 218103808 data_used: 352256
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.203613281s of 10.384685516s, submitted: 64
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141625 data_alloc: 218103808 data_used: 352256
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c4dc/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 17
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153121 data_alloc: 218103808 data_used: 360448
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91652096 unmapped: 1622016 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x116fc59/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928189278s of 10.833756447s, submitted: 92
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146159 data_alloc: 218103808 data_used: 364544
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa416000/0x0/0x4ffc00000, data 0x116fa7c/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 1572864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 00:45:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882396314' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.228721619s of 64.248054504s, submitted: 16
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 ms_handle_reset con 0x557764264c00 session 0x5577635ba1e0
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 1343488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 18
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x11715ba/0x125c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.423639297s of 11.447608948s, submitted: 183
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155219 data_alloc: 218103808 data_used: 380928
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.546059608s of 12.619788170s, submitted: 25
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158853 data_alloc: 218103808 data_used: 389120
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa40d000/0x0/0x4ffc00000, data 0x1174b68/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.282610893s of 14.913866997s, submitted: 51
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.777770996s of 21.886068344s, submitted: 15
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.618280411s of 16.621215820s, submitted: 1
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.889047623s of 25.900033951s, submitted: 3
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166389 data_alloc: 218103808 data_used: 409600
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 1228800 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 1220608 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}'
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'config show' '{prefix=config show}'
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 2285568 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 2351104 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:30 np0005539482 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}'
Nov 29 00:45:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14569 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 00:45:30 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146028127' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 00:45:30 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:45:30 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14573 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 00:45:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542233319' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 00:45:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:31 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 00:45:31 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147464233' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 00:45:31 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14581 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 00:45:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797536980' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 00:45:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14585 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:45:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 00:45:32 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836742134' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 00:45:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14589 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:45:32 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:45:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 00:45:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031763189' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 00:45:33 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14599 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:45:33 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:45:33.781+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:45:33 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128209929' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100858855' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 00:45:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082708319' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 00:45:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606643643' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813889 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 925696 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815036 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815036 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815036 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.166151047s of 17.183015823s, submitted: 4
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817333 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819631 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.017621994s of 12.047937393s, submitted: 8
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820779 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823073 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825369 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.896008492s of 13.930132866s, submitted: 10
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826517 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827664 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 729088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828811 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828811 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.928232193s of 20.949409485s, submitted: 6
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832254 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832254 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.023162842s of 10.054508209s, submitted: 8
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836842 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 614400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841433 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 606208 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 598016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 598016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.778158188s of 10.833313942s, submitted: 14
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 598016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 581632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844878 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 581632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 573440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 573440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 565248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 565248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847173 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 565248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 557056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 557056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948025703s of 10.981702805s, submitted: 10
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849468 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850615 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 507904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850615 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.895287514s of 11.917461395s, submitted: 6
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852909 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856350 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.140506744s of 12.177715302s, submitted: 10
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857498 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 98304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 98304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 32768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 32768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 32768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 8192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 8192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 8192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1032192 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1032192 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1024000 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1024000 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1024000 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1015808 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1015808 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1007616 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1007616 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1007616 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 876544 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 876544 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 868352 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 868352 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 868352 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 860160 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 860160 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 851968 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 851968 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 851968 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 819200 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 819200 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 811008 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 811008 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 811008 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 802816 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 802816 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 794624 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 794624 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 786432 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 786432 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 786432 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 778240 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 778240 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 770048 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 770048 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 761856 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 761856 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 761856 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 753664 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 753664 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 745472 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 745472 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 745472 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 737280 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 737280 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 729088 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 729088 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 720896 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 720896 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 712704 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 712704 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 712704 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 704512 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 704512 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 696320 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 696320 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 696320 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 688128 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 688128 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 679936 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 679936 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 671744 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 671744 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 671744 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 663552 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 663552 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 655360 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 655360 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:34 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 647168 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:45:58 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 00:45:58 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474353515' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 00:45:59 np0005539482 rsyslogd[1003]: imjournal: 18514 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 00:45:59 np0005539482 systemd[1]: Started libpod-conmon-a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee.scope.
Nov 29 00:45:59 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 00:45:59 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437090990' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 00:45:59 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:45:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:45:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:45:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:45:59 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:45:59 np0005539482 podman[281882]: 2025-11-29 05:45:59.379517438 +0000 UTC m=+1.225182833 container init a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:45:59 np0005539482 podman[281882]: 2025-11-29 05:45:59.387478267 +0000 UTC m=+1.233143632 container start a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:45:59 np0005539482 podman[281882]: 2025-11-29 05:45:59.391220306 +0000 UTC m=+1.236885671 container attach a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:45:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14753 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:45:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14755 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:46:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]: {
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "osd_id": 0,
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "type": "bluestore"
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:    },
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "osd_id": 1,
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "type": "bluestore"
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:    },
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "osd_id": 2,
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:        "type": "bluestore"
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]:    }
Nov 29 00:46:00 np0005539482 laughing_rhodes[281977]: }
Nov 29 00:46:00 np0005539482 systemd[1]: libpod-a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee.scope: Deactivated successfully.
Nov 29 00:46:00 np0005539482 podman[281882]: 2025-11-29 05:46:00.335698412 +0000 UTC m=+2.181363767 container died a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973467258' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 00:46:00 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49-merged.mount: Deactivated successfully.
Nov 29 00:46:00 np0005539482 podman[281882]: 2025-11-29 05:46:00.731670639 +0000 UTC m=+2.577336004 container remove a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 00:46:00 np0005539482 systemd[1]: libpod-conmon-a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee.scope: Deactivated successfully.
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251823936' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:46:00 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7abb6406-2d4a-4be5-b21a-a2366540b388 does not exist
Nov 29 00:46:00 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 4d00d015-b9c1-4d85-9292-ea19bd5fae69 does not exist
Nov 29 00:46:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:01 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 00:46:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:46:01 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:46:02 np0005539482 systemd[1]: Starting Time & Date Service...
Nov 29 00:46:02 np0005539482 systemd[1]: Started Time & Date Service.
Nov 29 00:46:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:06 np0005539482 podman[282615]: 2025-11-29 05:46:06.047793306 +0000 UTC m=+0.084576013 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:46:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:10 np0005539482 podman[282636]: 2025-11-29 05:46:10.07057445 +0000 UTC m=+0.120952738 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:46:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:46:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:46:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:46:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:46:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:46:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:46:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:46:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:46:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:46:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:46:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:46:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:46:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:46:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737330326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:46:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:46:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737330326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:46:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:20 np0005539482 podman[282662]: 2025-11-29 05:46:20.012722291 +0000 UTC m=+0.055132592 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 00:46:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:30 np0005539482 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 00:46:30 np0005539482 systemd[1]: session-51.scope: Consumed 2min 34.815s CPU time, 763.2M memory peak, read 280.1M from disk, written 214.8M to disk.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: Removed session 51.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: New session 52 of user zuul.
Nov 29 00:46:30 np0005539482 systemd[1]: Started Session 52 of User zuul.
Nov 29 00:46:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:30 np0005539482 systemd[1]: session-52.scope: Deactivated successfully.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: Removed session 52.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: New session 53 of user zuul.
Nov 29 00:46:30 np0005539482 systemd[1]: Started Session 53 of User zuul.
Nov 29 00:46:30 np0005539482 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Nov 29 00:46:30 np0005539482 systemd-logind[793]: Removed session 53.
Nov 29 00:46:32 np0005539482 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 00:46:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:32 np0005539482 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 00:46:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:36 np0005539482 podman[282746]: 2025-11-29 05:46:36.548123268 +0000 UTC m=+0.091811436 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 00:46:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:39 np0005539482 nova_compute[254898]: 2025-11-29 05:46:39.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:40 np0005539482 nova_compute[254898]: 2025-11-29 05:46:40.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:41 np0005539482 podman[282767]: 2025-11-29 05:46:41.071728396 +0000 UTC m=+0.120051057 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:46:41
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta']
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:46:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:46:41 np0005539482 nova_compute[254898]: 2025-11-29 05:46:41.950 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:41 np0005539482 nova_compute[254898]: 2025-11-29 05:46:41.965 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:46:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:46:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:42 np0005539482 nova_compute[254898]: 2025-11-29 05:46:42.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:42 np0005539482 nova_compute[254898]: 2025-11-29 05:46:42.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:42 np0005539482 nova_compute[254898]: 2025-11-29 05:46:42.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:42 np0005539482 nova_compute[254898]: 2025-11-29 05:46:42.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:46:42 np0005539482 nova_compute[254898]: 2025-11-29 05:46:42.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.060 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:46:43 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:46:43 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681471638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.501 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.659 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.660 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4972MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.660 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.661 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.891 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.892 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:46:43 np0005539482 nova_compute[254898]: 2025-11-29 05:46:43.910 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:46:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:44 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:46:44 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606435426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:46:44 np0005539482 nova_compute[254898]: 2025-11-29 05:46:44.306 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:46:44 np0005539482 nova_compute[254898]: 2025-11-29 05:46:44.310 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:46:44 np0005539482 nova_compute[254898]: 2025-11-29 05:46:44.343 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:46:44 np0005539482 nova_compute[254898]: 2025-11-29 05:46:44.344 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:46:44 np0005539482 nova_compute[254898]: 2025-11-29 05:46:44.345 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:46:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:50 np0005539482 nova_compute[254898]: 2025-11-29 05:46:50.340 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:50 np0005539482 nova_compute[254898]: 2025-11-29 05:46:50.341 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:46:50 np0005539482 nova_compute[254898]: 2025-11-29 05:46:50.341 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:46:50 np0005539482 nova_compute[254898]: 2025-11-29 05:46:50.341 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:46:51 np0005539482 podman[282839]: 2025-11-29 05:46:51.026092236 +0000 UTC m=+0.072202228 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:46:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:46:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:52 np0005539482 nova_compute[254898]: 2025-11-29 05:46:52.426 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 00:46:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:46:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:46:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:47:01 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 94938caa-906b-4e1e-926e-7bfe26b9392c does not exist
Nov 29 00:47:01 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0cc35450-1361-4d0c-9efb-2ef5df8de886 does not exist
Nov 29 00:47:01 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev fc1eb606-2a80-427c-b923-0e6796199957 does not exist
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:47:01 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:47:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:47:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:47:02 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.376253748 +0000 UTC m=+0.062666862 container create 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 00:47:02 np0005539482 systemd[1]: Started libpod-conmon-2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2.scope.
Nov 29 00:47:02 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.340355534 +0000 UTC m=+0.026768658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.551015595 +0000 UTC m=+0.237428709 container init 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.55840664 +0000 UTC m=+0.244819754 container start 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:47:02 np0005539482 fervent_turing[283149]: 167 167
Nov 29 00:47:02 np0005539482 systemd[1]: libpod-2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2.scope: Deactivated successfully.
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.572797452 +0000 UTC m=+0.259210566 container attach 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.573370707 +0000 UTC m=+0.259783811 container died 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:47:02 np0005539482 systemd[1]: var-lib-containers-storage-overlay-696ce470ec34c27bc473c63fcb478980495bd1bd7c8f6a8a9da86893f30e877c-merged.mount: Deactivated successfully.
Nov 29 00:47:02 np0005539482 podman[283133]: 2025-11-29 05:47:02.663093611 +0000 UTC m=+0.349506725 container remove 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 00:47:02 np0005539482 systemd[1]: libpod-conmon-2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2.scope: Deactivated successfully.
Nov 29 00:47:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:02 np0005539482 podman[283177]: 2025-11-29 05:47:02.793088023 +0000 UTC m=+0.021153985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:47:02 np0005539482 podman[283177]: 2025-11-29 05:47:02.924030947 +0000 UTC m=+0.152096929 container create 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:47:03 np0005539482 systemd[1]: Started libpod-conmon-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope.
Nov 29 00:47:03 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:47:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:03 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:03 np0005539482 podman[283177]: 2025-11-29 05:47:03.069826175 +0000 UTC m=+0.297892197 container init 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:47:03 np0005539482 podman[283177]: 2025-11-29 05:47:03.078122932 +0000 UTC m=+0.306188874 container start 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:47:03 np0005539482 podman[283177]: 2025-11-29 05:47:03.094479231 +0000 UTC m=+0.322545213 container attach 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 00:47:04 np0005539482 adoring_euclid[283194]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:47:04 np0005539482 adoring_euclid[283194]: --> relative data size: 1.0
Nov 29 00:47:04 np0005539482 adoring_euclid[283194]: --> All data devices are unavailable
Nov 29 00:47:04 np0005539482 systemd[1]: libpod-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope: Deactivated successfully.
Nov 29 00:47:04 np0005539482 podman[283177]: 2025-11-29 05:47:04.241101085 +0000 UTC m=+1.469167067 container died 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:47:04 np0005539482 systemd[1]: libpod-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope: Consumed 1.097s CPU time.
Nov 29 00:47:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:04 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9-merged.mount: Deactivated successfully.
Nov 29 00:47:04 np0005539482 podman[283177]: 2025-11-29 05:47:04.548398753 +0000 UTC m=+1.776464685 container remove 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 00:47:04 np0005539482 systemd[1]: libpod-conmon-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope: Deactivated successfully.
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.100756712 +0000 UTC m=+0.023410488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.27516128 +0000 UTC m=+0.197815036 container create e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:47:05 np0005539482 systemd[1]: Started libpod-conmon-e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2.scope.
Nov 29 00:47:05 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.437188664 +0000 UTC m=+0.359842430 container init e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.444323064 +0000 UTC m=+0.366976820 container start e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:47:05 np0005539482 inspiring_moser[283396]: 167 167
Nov 29 00:47:05 np0005539482 systemd[1]: libpod-e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2.scope: Deactivated successfully.
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.572829541 +0000 UTC m=+0.495483297 container attach e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.573420755 +0000 UTC m=+0.496074511 container died e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 00:47:05 np0005539482 systemd[1]: var-lib-containers-storage-overlay-18675f06305cb69f3e5a3793039716cc6c5b07bf585b97f2f450980426f9cb3b-merged.mount: Deactivated successfully.
Nov 29 00:47:05 np0005539482 podman[283380]: 2025-11-29 05:47:05.7401615 +0000 UTC m=+0.662815296 container remove e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:47:05 np0005539482 systemd[1]: libpod-conmon-e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2.scope: Deactivated successfully.
Nov 29 00:47:05 np0005539482 podman[283420]: 2025-11-29 05:47:05.877040456 +0000 UTC m=+0.027673749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:47:06 np0005539482 podman[283420]: 2025-11-29 05:47:06.060742336 +0000 UTC m=+0.211375599 container create 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:47:06 np0005539482 systemd[1]: Started libpod-conmon-91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c.scope.
Nov 29 00:47:06 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:47:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:06 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:06 np0005539482 podman[283420]: 2025-11-29 05:47:06.436847562 +0000 UTC m=+0.587480855 container init 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:47:06 np0005539482 podman[283420]: 2025-11-29 05:47:06.445688912 +0000 UTC m=+0.596322175 container start 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 00:47:06 np0005539482 podman[283420]: 2025-11-29 05:47:06.463010275 +0000 UTC m=+0.613643548 container attach 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:47:07 np0005539482 podman[283443]: 2025-11-29 05:47:07.015310281 +0000 UTC m=+0.068891269 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 00:47:07 np0005539482 magical_tharp[283438]: {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:    "0": [
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:        {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "devices": [
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "/dev/loop3"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            ],
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_name": "ceph_lv0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_size": "21470642176",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "name": "ceph_lv0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "tags": {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cluster_name": "ceph",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.crush_device_class": "",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.encrypted": "0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osd_id": "0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.type": "block",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.vdo": "0"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            },
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "type": "block",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "vg_name": "ceph_vg0"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:        }
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:    ],
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:    "1": [
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:        {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "devices": [
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "/dev/loop4"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            ],
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_name": "ceph_lv1",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_size": "21470642176",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "name": "ceph_lv1",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "tags": {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cluster_name": "ceph",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.crush_device_class": "",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.encrypted": "0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osd_id": "1",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.type": "block",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.vdo": "0"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            },
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "type": "block",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "vg_name": "ceph_vg1"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:        }
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:    ],
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:    "2": [
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:        {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "devices": [
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "/dev/loop5"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            ],
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_name": "ceph_lv2",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_size": "21470642176",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "name": "ceph_lv2",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "tags": {
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.cluster_name": "ceph",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.crush_device_class": "",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.encrypted": "0",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osd_id": "2",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.type": "block",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:                "ceph.vdo": "0"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            },
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "type": "block",
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:            "vg_name": "ceph_vg2"
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:        }
Nov 29 00:47:07 np0005539482 magical_tharp[283438]:    ]
Nov 29 00:47:07 np0005539482 magical_tharp[283438]: }
Nov 29 00:47:07 np0005539482 systemd[1]: libpod-91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c.scope: Deactivated successfully.
Nov 29 00:47:07 np0005539482 podman[283420]: 2025-11-29 05:47:07.294242576 +0000 UTC m=+1.444875839 container died 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:47:07 np0005539482 systemd[1]: var-lib-containers-storage-overlay-66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42-merged.mount: Deactivated successfully.
Nov 29 00:47:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:07 np0005539482 podman[283420]: 2025-11-29 05:47:07.783733979 +0000 UTC m=+1.934367242 container remove 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:47:07 np0005539482 systemd[1]: libpod-conmon-91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c.scope: Deactivated successfully.
Nov 29 00:47:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:08 np0005539482 podman[283623]: 2025-11-29 05:47:08.592520057 +0000 UTC m=+0.108473842 container create f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:47:08 np0005539482 podman[283623]: 2025-11-29 05:47:08.507253039 +0000 UTC m=+0.023206844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:47:08 np0005539482 systemd[1]: Started libpod-conmon-f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230.scope.
Nov 29 00:47:08 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:47:08 np0005539482 podman[283623]: 2025-11-29 05:47:08.701630872 +0000 UTC m=+0.217584667 container init f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 00:47:08 np0005539482 podman[283623]: 2025-11-29 05:47:08.709708164 +0000 UTC m=+0.225661949 container start f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:47:08 np0005539482 podman[283623]: 2025-11-29 05:47:08.713426432 +0000 UTC m=+0.229380217 container attach f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:47:08 np0005539482 elastic_lalande[283640]: 167 167
Nov 29 00:47:08 np0005539482 systemd[1]: libpod-f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230.scope: Deactivated successfully.
Nov 29 00:47:08 np0005539482 podman[283645]: 2025-11-29 05:47:08.75243643 +0000 UTC m=+0.022635189 container died f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:47:08 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7ab848df72037c439cac36798045516ae79cbace7408e6d72832432827cc4be1-merged.mount: Deactivated successfully.
Nov 29 00:47:08 np0005539482 podman[283645]: 2025-11-29 05:47:08.796050228 +0000 UTC m=+0.066248977 container remove f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:47:08 np0005539482 systemd[1]: libpod-conmon-f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230.scope: Deactivated successfully.
Nov 29 00:47:09 np0005539482 podman[283667]: 2025-11-29 05:47:08.953652807 +0000 UTC m=+0.026268586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:47:09 np0005539482 podman[283667]: 2025-11-29 05:47:09.722686858 +0000 UTC m=+0.795302607 container create 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:47:09 np0005539482 systemd[1]: Started libpod-conmon-03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a.scope.
Nov 29 00:47:09 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:47:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:09 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:47:09 np0005539482 podman[283667]: 2025-11-29 05:47:09.824033959 +0000 UTC m=+0.896649708 container init 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:47:09 np0005539482 podman[283667]: 2025-11-29 05:47:09.833026863 +0000 UTC m=+0.905642612 container start 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:47:09 np0005539482 podman[283667]: 2025-11-29 05:47:09.837717614 +0000 UTC m=+0.910333383 container attach 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:47:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:10 np0005539482 clever_fermat[283684]: {
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "osd_id": 0,
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "type": "bluestore"
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:    },
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "osd_id": 1,
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "type": "bluestore"
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:    },
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "osd_id": 2,
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:        "type": "bluestore"
Nov 29 00:47:10 np0005539482 clever_fermat[283684]:    }
Nov 29 00:47:10 np0005539482 clever_fermat[283684]: }
Nov 29 00:47:10 np0005539482 systemd[1]: libpod-03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a.scope: Deactivated successfully.
Nov 29 00:47:10 np0005539482 podman[283667]: 2025-11-29 05:47:10.762397948 +0000 UTC m=+1.835013727 container died 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:47:11 np0005539482 systemd[1]: var-lib-containers-storage-overlay-684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c-merged.mount: Deactivated successfully.
Nov 29 00:47:11 np0005539482 podman[283667]: 2025-11-29 05:47:11.245601801 +0000 UTC m=+2.318217550 container remove 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:47:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:47:11 np0005539482 systemd[1]: libpod-conmon-03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a.scope: Deactivated successfully.
Nov 29 00:47:11 np0005539482 podman[283729]: 2025-11-29 05:47:11.340248933 +0000 UTC m=+0.150431459 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:47:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:47:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:47:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:47:11 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 1726d3a1-7af1-46d0-bf32-c98c8262fda3 does not exist
Nov 29 00:47:11 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 43d31791-ec59-41df-bfb2-4ae21d53b348 does not exist
Nov 29 00:47:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:47:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:47:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:47:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:47:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:47:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:47:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:47:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:47:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:47:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:47:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:47:13.762 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:47:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:47:13.762 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:47:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:47:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959059243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:47:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:47:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959059243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:47:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:21 np0005539482 podman[283805]: 2025-11-29 05:47:21.999054369 +0000 UTC m=+0.049571720 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 00:47:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:38 np0005539482 podman[283828]: 2025-11-29 05:47:38.044747577 +0000 UTC m=+0.082531304 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 00:47:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:40 np0005539482 nova_compute[254898]: 2025-11-29 05:47:40.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:47:41
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'vms', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:47:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:47:41 np0005539482 nova_compute[254898]: 2025-11-29 05:47:41.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:47:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:47:42 np0005539482 podman[283850]: 2025-11-29 05:47:42.057752758 +0000 UTC m=+0.100739117 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 00:47:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:43 np0005539482 nova_compute[254898]: 2025-11-29 05:47:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:44 np0005539482 nova_compute[254898]: 2025-11-29 05:47:44.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:44 np0005539482 nova_compute[254898]: 2025-11-29 05:47:44.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:44 np0005539482 nova_compute[254898]: 2025-11-29 05:47:44.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:44 np0005539482 nova_compute[254898]: 2025-11-29 05:47:44.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:47:44 np0005539482 nova_compute[254898]: 2025-11-29 05:47:44.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.021 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:47:45 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:47:45 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422408111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.489 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.629 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.630 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.631 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.631 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.860 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.860 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:47:45 np0005539482 nova_compute[254898]: 2025-11-29 05:47:45.879 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:47:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:47:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936850596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:47:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:46 np0005539482 nova_compute[254898]: 2025-11-29 05:47:46.273 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:47:46 np0005539482 nova_compute[254898]: 2025-11-29 05:47:46.278 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:47:46 np0005539482 nova_compute[254898]: 2025-11-29 05:47:46.378 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:47:46 np0005539482 nova_compute[254898]: 2025-11-29 05:47:46.382 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:47:46 np0005539482 nova_compute[254898]: 2025-11-29 05:47:46.383 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:47:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:50 np0005539482 nova_compute[254898]: 2025-11-29 05:47:50.380 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:50 np0005539482 nova_compute[254898]: 2025-11-29 05:47:50.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:47:50 np0005539482 nova_compute[254898]: 2025-11-29 05:47:50.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:47:50 np0005539482 nova_compute[254898]: 2025-11-29 05:47:50.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:47:51 np0005539482 nova_compute[254898]: 2025-11-29 05:47:51.177 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:47:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:47:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:52 np0005539482 podman[283921]: 2025-11-29 05:47:52.998482522 +0000 UTC m=+0.050037312 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 00:47:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:47:54 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6526 writes, 30K keys, 6526 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6526 writes, 6526 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1715 writes, 8415 keys, 1715 commit groups, 1.0 writes per commit group, ingest: 10.60 MB, 0.02 MB/s#012Interval WAL: 1715 writes, 1715 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    110.5      0.30              0.13        16    0.019       0      0       0.0       0.0#012  L6      1/0    8.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    151.1    123.9      0.92              0.42        15    0.061     72K   8391       0.0       0.0#012 Sum      1/0    8.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    113.7    120.6      1.22              0.56        31    0.039     72K   8391       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1    125.0    127.9      0.35              0.16         8    0.044     24K   2605       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    151.1    123.9      0.92              0.42        15    0.061     72K   8391       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    111.1      0.30              0.13        15    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.2 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 304.00 MB usage: 16.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000234 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1249,15.50 MB,5.09985%) FilterBlock(32,213.36 KB,0.0685391%) IndexBlock(32,386.75 KB,0.124239%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 00:47:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:47:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:47:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:09 np0005539482 podman[283941]: 2025-11-29 05:48:09.079457899 +0000 UTC m=+0.132910872 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 00:48:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:48:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:48:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:48:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:48:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:48:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:48:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:48:12 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev cbec915b-f65a-46c2-b015-dff53fe8606b does not exist
Nov 29 00:48:12 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a7c94ba6-f09e-4eca-b63a-1832bddd4398 does not exist
Nov 29 00:48:12 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b7218b4b-bba6-41f2-a5d7-ef694b233edc does not exist
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:48:12 np0005539482 podman[284116]: 2025-11-29 05:48:12.515097418 +0000 UTC m=+0.085894824 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 00:48:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:13.000375431 +0000 UTC m=+0.058571405 container create 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:48:13 np0005539482 systemd[1]: Started libpod-conmon-15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657.scope.
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:12.966896814 +0000 UTC m=+0.025092838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:48:13 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:13.113121132 +0000 UTC m=+0.171317176 container init 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:13.12144057 +0000 UTC m=+0.179636504 container start 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:13.124910723 +0000 UTC m=+0.183106767 container attach 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 00:48:13 np0005539482 boring_mestorf[284274]: 167 167
Nov 29 00:48:13 np0005539482 systemd[1]: libpod-15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657.scope: Deactivated successfully.
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:13.130119266 +0000 UTC m=+0.188315220 container died 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 00:48:13 np0005539482 systemd[1]: var-lib-containers-storage-overlay-79e95849fc2762a83b4688c5f478fcc9c880ce052bbb62c47fd184d5346a78a6-merged.mount: Deactivated successfully.
Nov 29 00:48:13 np0005539482 podman[284258]: 2025-11-29 05:48:13.181000707 +0000 UTC m=+0.239196661 container remove 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:48:13 np0005539482 systemd[1]: libpod-conmon-15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657.scope: Deactivated successfully.
Nov 29 00:48:13 np0005539482 podman[284299]: 2025-11-29 05:48:13.409217595 +0000 UTC m=+0.042070912 container create ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:48:13 np0005539482 systemd[1]: Started libpod-conmon-ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d.scope.
Nov 29 00:48:13 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:48:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:13 np0005539482 podman[284299]: 2025-11-29 05:48:13.393609953 +0000 UTC m=+0.026463290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:48:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:13 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:13 np0005539482 podman[284299]: 2025-11-29 05:48:13.501663474 +0000 UTC m=+0.134516821 container init ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:48:13 np0005539482 podman[284299]: 2025-11-29 05:48:13.510060873 +0000 UTC m=+0.142914190 container start ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 00:48:13 np0005539482 podman[284299]: 2025-11-29 05:48:13.513523405 +0000 UTC m=+0.146376722 container attach ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:48:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:48:13.762 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:48:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:48:13.763 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:48:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:48:13.763 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:48:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:48:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951864936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:48:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:48:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951864936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:48:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:14 np0005539482 goofy_sutherland[284315]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:48:14 np0005539482 goofy_sutherland[284315]: --> relative data size: 1.0
Nov 29 00:48:14 np0005539482 goofy_sutherland[284315]: --> All data devices are unavailable
Nov 29 00:48:14 np0005539482 systemd[1]: libpod-ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d.scope: Deactivated successfully.
Nov 29 00:48:14 np0005539482 podman[284299]: 2025-11-29 05:48:14.501671139 +0000 UTC m=+1.134524456 container died ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:48:14 np0005539482 systemd[1]: var-lib-containers-storage-overlay-61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c-merged.mount: Deactivated successfully.
Nov 29 00:48:14 np0005539482 podman[284299]: 2025-11-29 05:48:14.554796643 +0000 UTC m=+1.187649960 container remove ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:48:14 np0005539482 systemd[1]: libpod-conmon-ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d.scope: Deactivated successfully.
Nov 29 00:48:15 np0005539482 podman[284495]: 2025-11-29 05:48:15.174464312 +0000 UTC m=+0.039115512 container create b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:48:15 np0005539482 systemd[1]: Started libpod-conmon-b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63.scope.
Nov 29 00:48:15 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:48:15 np0005539482 podman[284495]: 2025-11-29 05:48:15.233422734 +0000 UTC m=+0.098073934 container init b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:48:15 np0005539482 podman[284495]: 2025-11-29 05:48:15.239756985 +0000 UTC m=+0.104408195 container start b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:48:15 np0005539482 podman[284495]: 2025-11-29 05:48:15.242508211 +0000 UTC m=+0.107159441 container attach b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:48:15 np0005539482 naughty_hoover[284511]: 167 167
Nov 29 00:48:15 np0005539482 systemd[1]: libpod-b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63.scope: Deactivated successfully.
Nov 29 00:48:15 np0005539482 podman[284495]: 2025-11-29 05:48:15.158785339 +0000 UTC m=+0.023436559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:48:15 np0005539482 podman[284516]: 2025-11-29 05:48:15.282177104 +0000 UTC m=+0.023012538 container died b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:48:15 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f924cb01da699a19b222a64a916d2d68450310dc6dd1580b339088eee29a5da8-merged.mount: Deactivated successfully.
Nov 29 00:48:15 np0005539482 podman[284516]: 2025-11-29 05:48:15.314340049 +0000 UTC m=+0.055175483 container remove b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:48:15 np0005539482 systemd[1]: libpod-conmon-b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63.scope: Deactivated successfully.
Nov 29 00:48:15 np0005539482 podman[284538]: 2025-11-29 05:48:15.471074647 +0000 UTC m=+0.043557267 container create fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:48:15 np0005539482 systemd[1]: Started libpod-conmon-fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35.scope.
Nov 29 00:48:15 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:48:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:15 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:15 np0005539482 podman[284538]: 2025-11-29 05:48:15.450843606 +0000 UTC m=+0.023326256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:48:15 np0005539482 podman[284538]: 2025-11-29 05:48:15.55189435 +0000 UTC m=+0.124377000 container init fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:48:15 np0005539482 podman[284538]: 2025-11-29 05:48:15.557163305 +0000 UTC m=+0.129645925 container start fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:48:15 np0005539482 podman[284538]: 2025-11-29 05:48:15.560600317 +0000 UTC m=+0.133082937 container attach fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:48:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:16 np0005539482 bold_brattain[284555]: {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:    "0": [
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:        {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "devices": [
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "/dev/loop3"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            ],
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_name": "ceph_lv0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_size": "21470642176",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "name": "ceph_lv0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "tags": {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cluster_name": "ceph",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.crush_device_class": "",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.encrypted": "0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osd_id": "0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.type": "block",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.vdo": "0"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            },
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "type": "block",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "vg_name": "ceph_vg0"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:        }
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:    ],
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:    "1": [
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:        {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "devices": [
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "/dev/loop4"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            ],
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_name": "ceph_lv1",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_size": "21470642176",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "name": "ceph_lv1",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "tags": {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cluster_name": "ceph",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.crush_device_class": "",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.encrypted": "0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osd_id": "1",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.type": "block",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.vdo": "0"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            },
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "type": "block",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "vg_name": "ceph_vg1"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:        }
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:    ],
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:    "2": [
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:        {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "devices": [
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "/dev/loop5"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            ],
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_name": "ceph_lv2",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_size": "21470642176",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "name": "ceph_lv2",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "tags": {
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.cluster_name": "ceph",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.crush_device_class": "",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.encrypted": "0",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osd_id": "2",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.type": "block",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:                "ceph.vdo": "0"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            },
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "type": "block",
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:            "vg_name": "ceph_vg2"
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:        }
Nov 29 00:48:16 np0005539482 bold_brattain[284555]:    ]
Nov 29 00:48:16 np0005539482 bold_brattain[284555]: }
Nov 29 00:48:16 np0005539482 systemd[1]: libpod-fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35.scope: Deactivated successfully.
Nov 29 00:48:16 np0005539482 podman[284538]: 2025-11-29 05:48:16.339384806 +0000 UTC m=+0.911867446 container died fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:48:16 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb-merged.mount: Deactivated successfully.
Nov 29 00:48:16 np0005539482 podman[284538]: 2025-11-29 05:48:16.395832803 +0000 UTC m=+0.968315443 container remove fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 00:48:16 np0005539482 systemd[1]: libpod-conmon-fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35.scope: Deactivated successfully.
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:17.013349028 +0000 UTC m=+0.036860617 container create 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:48:17 np0005539482 systemd[1]: Started libpod-conmon-73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4.scope.
Nov 29 00:48:17 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:48:17 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:17.081834024 +0000 UTC m=+0.105345643 container init 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:17.088848392 +0000 UTC m=+0.112359981 container start 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:17.092360786 +0000 UTC m=+0.115872375 container attach 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 00:48:17 np0005539482 optimistic_rosalind[284735]: 167 167
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:16.997718583 +0000 UTC m=+0.021230182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:48:17 np0005539482 systemd[1]: libpod-73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4.scope: Deactivated successfully.
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:17.093467523 +0000 UTC m=+0.116979122 container died 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 00:48:17 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f879c967adf5c21608aa88c0a76aae690e5bee9b0f72eb451030bb03a7fc4ef1-merged.mount: Deactivated successfully.
Nov 29 00:48:17 np0005539482 podman[284719]: 2025-11-29 05:48:17.125641546 +0000 UTC m=+0.149153125 container remove 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:48:17 np0005539482 systemd[1]: libpod-conmon-73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4.scope: Deactivated successfully.
Nov 29 00:48:17 np0005539482 podman[284761]: 2025-11-29 05:48:17.261478909 +0000 UTC m=+0.033715621 container create 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 00:48:17 np0005539482 systemd[1]: Started libpod-conmon-2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12.scope.
Nov 29 00:48:17 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:48:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:17 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:48:17 np0005539482 podman[284761]: 2025-11-29 05:48:17.246666233 +0000 UTC m=+0.018902965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:48:17 np0005539482 podman[284761]: 2025-11-29 05:48:17.349136316 +0000 UTC m=+0.121373058 container init 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:48:17 np0005539482 podman[284761]: 2025-11-29 05:48:17.362513186 +0000 UTC m=+0.134749898 container start 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:48:17 np0005539482 podman[284761]: 2025-11-29 05:48:17.366163984 +0000 UTC m=+0.138400696 container attach 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:48:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]: {
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "osd_id": 0,
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "type": "bluestore"
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:    },
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "osd_id": 1,
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "type": "bluestore"
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:    },
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "osd_id": 2,
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:        "type": "bluestore"
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]:    }
Nov 29 00:48:18 np0005539482 stoic_antonelli[284777]: }
Nov 29 00:48:18 np0005539482 systemd[1]: libpod-2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12.scope: Deactivated successfully.
Nov 29 00:48:18 np0005539482 podman[284761]: 2025-11-29 05:48:18.333218687 +0000 UTC m=+1.105455399 container died 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:48:18 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a-merged.mount: Deactivated successfully.
Nov 29 00:48:18 np0005539482 podman[284761]: 2025-11-29 05:48:18.407636755 +0000 UTC m=+1.179873507 container remove 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 00:48:18 np0005539482 systemd[1]: libpod-conmon-2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12.scope: Deactivated successfully.
Nov 29 00:48:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:48:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:48:18 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:48:18 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:48:18 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 3eceb939-9325-4486-b7fa-89a5f4744b9e does not exist
Nov 29 00:48:18 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev f9384723-9722-42f8-a3e9-ea77441b9bb0 does not exist
Nov 29 00:48:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:48:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:48:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:24 np0005539482 podman[284873]: 2025-11-29 05:48:24.018127412 +0000 UTC m=+0.065053754 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 00:48:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:40 np0005539482 podman[284896]: 2025-11-29 05:48:40.010362332 +0000 UTC m=+0.062796269 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 00:48:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:40 np0005539482 nova_compute[254898]: 2025-11-29 05:48:40.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:48:41
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'backups']
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:48:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:48:41 np0005539482 nova_compute[254898]: 2025-11-29 05:48:41.974 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:41 np0005539482 nova_compute[254898]: 2025-11-29 05:48:41.986 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:48:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:48:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:43 np0005539482 podman[284916]: 2025-11-29 05:48:43.04561844 +0000 UTC m=+0.104196014 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 00:48:43 np0005539482 nova_compute[254898]: 2025-11-29 05:48:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:44 np0005539482 nova_compute[254898]: 2025-11-29 05:48:44.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:44 np0005539482 nova_compute[254898]: 2025-11-29 05:48:44.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:48:45 np0005539482 nova_compute[254898]: 2025-11-29 05:48:45.987 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:48:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:46 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:48:46 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136831747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.420 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.565 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.566 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4980MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.567 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.567 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.925 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:48:46 np0005539482 nova_compute[254898]: 2025-11-29 05:48:46.925 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.014 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.119 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.120 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.131 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.153 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.166 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:48:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:48:47 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457357074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.556 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.562 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.577 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.578 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.578 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:48:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:47 np0005539482 nova_compute[254898]: 2025-11-29 05:48:47.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 00:48:48 np0005539482 nova_compute[254898]: 2025-11-29 05:48:48.268 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:50 np0005539482 nova_compute[254898]: 2025-11-29 05:48:50.972 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:50 np0005539482 nova_compute[254898]: 2025-11-29 05:48:50.972 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:50 np0005539482 nova_compute[254898]: 2025-11-29 05:48:50.973 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:48:50 np0005539482 nova_compute[254898]: 2025-11-29 05:48:50.973 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:48:50 np0005539482 nova_compute[254898]: 2025-11-29 05:48:50.989 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:48:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:48:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:52 np0005539482 nova_compute[254898]: 2025-11-29 05:48:52.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:48:52 np0005539482 nova_compute[254898]: 2025-11-29 05:48:52.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 00:48:52 np0005539482 nova_compute[254898]: 2025-11-29 05:48:52.978 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 00:48:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:55 np0005539482 podman[284986]: 2025-11-29 05:48:55.008397678 +0000 UTC m=+0.054239763 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 00:48:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:48:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:48:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.399390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344399502, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2408, "num_deletes": 507, "total_data_size": 3489987, "memory_usage": 3548544, "flush_reason": "Manual Compaction"}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344428389, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3433848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28629, "largest_seqno": 31036, "table_properties": {"data_size": 3423144, "index_size": 6366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25736, "raw_average_key_size": 19, "raw_value_size": 3399380, "raw_average_value_size": 2625, "num_data_blocks": 281, "num_entries": 1295, "num_filter_entries": 1295, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395117, "oldest_key_time": 1764395117, "file_creation_time": 1764395344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 29049 microseconds, and 12452 cpu microseconds.
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.428448) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3433848 bytes OK
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.428471) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.433910) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.433923) EVENT_LOG_v1 {"time_micros": 1764395344433919, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.433940) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3478648, prev total WAL file size 3478648, number of live WAL files 2.
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.434805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3353KB)], [62(8607KB)]
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344434837, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12248009, "oldest_snapshot_seqno": -1}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6118 keys, 10410175 bytes, temperature: kUnknown
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344502373, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10410175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10366875, "index_size": 26934, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 154209, "raw_average_key_size": 25, "raw_value_size": 10254792, "raw_average_value_size": 1676, "num_data_blocks": 1100, "num_entries": 6118, "num_filter_entries": 6118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.502594) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10410175 bytes
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.504506) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.2 rd, 154.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 7148, records dropped: 1030 output_compression: NoCompression
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.504521) EVENT_LOG_v1 {"time_micros": 1764395344504514, "job": 34, "event": "compaction_finished", "compaction_time_micros": 67604, "compaction_time_cpu_micros": 20410, "output_level": 6, "num_output_files": 1, "total_output_size": 10410175, "num_input_records": 7148, "num_output_records": 6118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344505122, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344506625, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.434709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:49:04 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:49:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:11 np0005539482 podman[285006]: 2025-11-29 05:49:11.009100843 +0000 UTC m=+0.054221102 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 00:49:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:49:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:49:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:49:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:49:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:49:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:49:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:49:13.764 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:49:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:49:13.764 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:49:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:49:13.765 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:49:14 np0005539482 podman[285026]: 2025-11-29 05:49:14.053376592 +0000 UTC m=+0.104604094 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 00:49:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:49:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874808858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:49:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:49:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874808858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:49:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:19 np0005539482 nova_compute[254898]: 2025-11-29 05:49:19.244 254902 DEBUG oslo_concurrency.processutils [None req-677f7328-0038-4310-a215-6b6c196af2d2 da42e74ed6d04223b9f1be411e89508b 389b14b74e3c4a1184dca228ba013067 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:49:19 np0005539482 nova_compute[254898]: 2025-11-29 05:49:19.274 254902 DEBUG oslo_concurrency.processutils [None req-677f7328-0038-4310-a215-6b6c196af2d2 da42e74ed6d04223b9f1be411e89508b 389b14b74e3c4a1184dca228ba013067 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:49:19 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 08dfde94-1a41-4539-b869-df98d45e93bc does not exist
Nov 29 00:49:19 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 0bb3d3a0-393a-4b64-bf45-4b4d68c95067 does not exist
Nov 29 00:49:19 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev b7d84775-41a8-418e-993f-b4df089a24cf does not exist
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:49:19 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:49:19 np0005539482 podman[285326]: 2025-11-29 05:49:19.912142234 +0000 UTC m=+0.039919360 container create 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 00:49:19 np0005539482 systemd[1]: Started libpod-conmon-8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25.scope.
Nov 29 00:49:19 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:49:19 np0005539482 podman[285326]: 2025-11-29 05:49:19.894564192 +0000 UTC m=+0.022341348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:49:20 np0005539482 podman[285326]: 2025-11-29 05:49:20.001647835 +0000 UTC m=+0.129424991 container init 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:49:20 np0005539482 podman[285326]: 2025-11-29 05:49:20.007943425 +0000 UTC m=+0.135720541 container start 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:20 np0005539482 podman[285326]: 2025-11-29 05:49:20.01101706 +0000 UTC m=+0.138794216 container attach 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:49:20 np0005539482 determined_nightingale[285342]: 167 167
Nov 29 00:49:20 np0005539482 systemd[1]: libpod-8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25.scope: Deactivated successfully.
Nov 29 00:49:20 np0005539482 podman[285326]: 2025-11-29 05:49:20.015546619 +0000 UTC m=+0.143323745 container died 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:49:20 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b236aead69a0c62144312497722f29f7a3ed6ba32e34b74efaf9220ef53ba3fc-merged.mount: Deactivated successfully.
Nov 29 00:49:20 np0005539482 podman[285326]: 2025-11-29 05:49:20.058666945 +0000 UTC m=+0.186444071 container remove 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 00:49:20 np0005539482 systemd[1]: libpod-conmon-8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25.scope: Deactivated successfully.
Nov 29 00:49:20 np0005539482 podman[285368]: 2025-11-29 05:49:20.220629195 +0000 UTC m=+0.043885195 container create 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:20 np0005539482 systemd[1]: Started libpod-conmon-255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391.scope.
Nov 29 00:49:20 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:49:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:20 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:20 np0005539482 podman[285368]: 2025-11-29 05:49:20.201867665 +0000 UTC m=+0.025123655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:49:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:20 np0005539482 podman[285368]: 2025-11-29 05:49:20.306715054 +0000 UTC m=+0.129971044 container init 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:49:20 np0005539482 podman[285368]: 2025-11-29 05:49:20.315096765 +0000 UTC m=+0.138352735 container start 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:49:20 np0005539482 podman[285368]: 2025-11-29 05:49:20.317530103 +0000 UTC m=+0.140786103 container attach 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:21 np0005539482 compassionate_shockley[285385]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:49:21 np0005539482 compassionate_shockley[285385]: --> relative data size: 1.0
Nov 29 00:49:21 np0005539482 compassionate_shockley[285385]: --> All data devices are unavailable
Nov 29 00:49:21 np0005539482 systemd[1]: libpod-255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391.scope: Deactivated successfully.
Nov 29 00:49:21 np0005539482 podman[285368]: 2025-11-29 05:49:21.325858008 +0000 UTC m=+1.149113988 container died 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:49:21 np0005539482 systemd[1]: var-lib-containers-storage-overlay-66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095-merged.mount: Deactivated successfully.
Nov 29 00:49:21 np0005539482 podman[285368]: 2025-11-29 05:49:21.374579728 +0000 UTC m=+1.197835698 container remove 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:49:21 np0005539482 systemd[1]: libpod-conmon-255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391.scope: Deactivated successfully.
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.084959405 +0000 UTC m=+0.039223883 container create 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:49:22 np0005539482 systemd[1]: Started libpod-conmon-287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4.scope.
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.068469599 +0000 UTC m=+0.022734087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:49:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.201768851 +0000 UTC m=+0.156033339 container init 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.209962878 +0000 UTC m=+0.164227346 container start 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.212947019 +0000 UTC m=+0.167211577 container attach 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 00:49:22 np0005539482 admiring_dijkstra[285585]: 167 167
Nov 29 00:49:22 np0005539482 systemd[1]: libpod-287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4.scope: Deactivated successfully.
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.215947931 +0000 UTC m=+0.170212409 container died 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:49:22 np0005539482 systemd[1]: var-lib-containers-storage-overlay-bf7e365b10ee122af41c62c1916df318057e991a7834c3c12a4ca72789422882-merged.mount: Deactivated successfully.
Nov 29 00:49:22 np0005539482 podman[285568]: 2025-11-29 05:49:22.24837504 +0000 UTC m=+0.202639508 container remove 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 00:49:22 np0005539482 systemd[1]: libpod-conmon-287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4.scope: Deactivated successfully.
Nov 29 00:49:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:22 np0005539482 podman[285609]: 2025-11-29 05:49:22.412522844 +0000 UTC m=+0.044864458 container create f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:49:22 np0005539482 systemd[1]: Started libpod-conmon-f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198.scope.
Nov 29 00:49:22 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:49:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:22 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:22 np0005539482 podman[285609]: 2025-11-29 05:49:22.39151563 +0000 UTC m=+0.023857294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:49:22 np0005539482 podman[285609]: 2025-11-29 05:49:22.495897337 +0000 UTC m=+0.128238961 container init f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:22 np0005539482 podman[285609]: 2025-11-29 05:49:22.504616697 +0000 UTC m=+0.136958311 container start f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:49:22 np0005539482 podman[285609]: 2025-11-29 05:49:22.507565138 +0000 UTC m=+0.139906772 container attach f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 00:49:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:23 np0005539482 awesome_jones[285626]: {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:    "0": [
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:        {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "devices": [
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "/dev/loop3"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            ],
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_name": "ceph_lv0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_size": "21470642176",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "name": "ceph_lv0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "tags": {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cluster_name": "ceph",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.crush_device_class": "",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.encrypted": "0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osd_id": "0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.type": "block",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.vdo": "0"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            },
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "type": "block",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "vg_name": "ceph_vg0"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:        }
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:    ],
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:    "1": [
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:        {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "devices": [
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "/dev/loop4"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            ],
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_name": "ceph_lv1",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_size": "21470642176",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "name": "ceph_lv1",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "tags": {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cluster_name": "ceph",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.crush_device_class": "",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.encrypted": "0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osd_id": "1",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.type": "block",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.vdo": "0"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            },
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "type": "block",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "vg_name": "ceph_vg1"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:        }
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:    ],
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:    "2": [
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:        {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "devices": [
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "/dev/loop5"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            ],
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_name": "ceph_lv2",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_size": "21470642176",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "name": "ceph_lv2",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "tags": {
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.cluster_name": "ceph",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.crush_device_class": "",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.encrypted": "0",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osd_id": "2",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.type": "block",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:                "ceph.vdo": "0"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            },
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "type": "block",
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:            "vg_name": "ceph_vg2"
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:        }
Nov 29 00:49:23 np0005539482 awesome_jones[285626]:    ]
Nov 29 00:49:23 np0005539482 awesome_jones[285626]: }
Nov 29 00:49:23 np0005539482 systemd[1]: libpod-f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198.scope: Deactivated successfully.
Nov 29 00:49:23 np0005539482 podman[285609]: 2025-11-29 05:49:23.329143345 +0000 UTC m=+0.961484969 container died f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:49:23 np0005539482 systemd[1]: var-lib-containers-storage-overlay-78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57-merged.mount: Deactivated successfully.
Nov 29 00:49:23 np0005539482 podman[285609]: 2025-11-29 05:49:23.378601264 +0000 UTC m=+1.010942878 container remove f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:49:23 np0005539482 systemd[1]: libpod-conmon-f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198.scope: Deactivated successfully.
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.042020572 +0000 UTC m=+0.046300654 container create d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:49:24 np0005539482 systemd[1]: Started libpod-conmon-d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8.scope.
Nov 29 00:49:24 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.024647924 +0000 UTC m=+0.028928056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.125891906 +0000 UTC m=+0.130172028 container init d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.138327135 +0000 UTC m=+0.142607227 container start d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.141197064 +0000 UTC m=+0.145477246 container attach d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 29 00:49:24 np0005539482 modest_goldwasser[285806]: 167 167
Nov 29 00:49:24 np0005539482 systemd[1]: libpod-d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8.scope: Deactivated successfully.
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.145156729 +0000 UTC m=+0.149436821 container died d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:49:24 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a7be8ef5d4becb8e7b4483acb0a787332c8a3bac46dbba8163cc35afb5a366ce-merged.mount: Deactivated successfully.
Nov 29 00:49:24 np0005539482 podman[285790]: 2025-11-29 05:49:24.186067262 +0000 UTC m=+0.190347344 container remove d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:49:24 np0005539482 systemd[1]: libpod-conmon-d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8.scope: Deactivated successfully.
Nov 29 00:49:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:24 np0005539482 podman[285828]: 2025-11-29 05:49:24.349865177 +0000 UTC m=+0.046198851 container create 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:24 np0005539482 systemd[1]: Started libpod-conmon-84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41.scope.
Nov 29 00:49:24 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:49:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:24 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:49:24 np0005539482 podman[285828]: 2025-11-29 05:49:24.328636817 +0000 UTC m=+0.024970511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:49:24 np0005539482 podman[285828]: 2025-11-29 05:49:24.433384104 +0000 UTC m=+0.129717798 container init 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:49:24 np0005539482 podman[285828]: 2025-11-29 05:49:24.440871223 +0000 UTC m=+0.137204897 container start 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 00:49:24 np0005539482 podman[285828]: 2025-11-29 05:49:24.44444575 +0000 UTC m=+0.140779424 container attach 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]: {
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "osd_id": 0,
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "type": "bluestore"
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:    },
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "osd_id": 1,
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "type": "bluestore"
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:    },
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "osd_id": 2,
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:        "type": "bluestore"
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]:    }
Nov 29 00:49:25 np0005539482 naughty_wilson[285844]: }
Nov 29 00:49:25 np0005539482 systemd[1]: libpod-84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41.scope: Deactivated successfully.
Nov 29 00:49:25 np0005539482 podman[285828]: 2025-11-29 05:49:25.431177826 +0000 UTC m=+1.127511500 container died 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:49:25 np0005539482 systemd[1]: var-lib-containers-storage-overlay-29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7-merged.mount: Deactivated successfully.
Nov 29 00:49:25 np0005539482 podman[285828]: 2025-11-29 05:49:25.489050425 +0000 UTC m=+1.185384099 container remove 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:49:25 np0005539482 systemd[1]: libpod-conmon-84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41.scope: Deactivated successfully.
Nov 29 00:49:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:49:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:49:25 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:49:25 np0005539482 podman[285878]: 2025-11-29 05:49:25.538428702 +0000 UTC m=+0.067402961 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:49:25 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:49:25 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 84ae6f4e-32a9-4929-8ffd-7bf327f3c1e3 does not exist
Nov 29 00:49:25 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev fcc4f5ba-84f0-450e-9ba7-5445fc954b5f does not exist
Nov 29 00:49:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:26 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:49:26 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:49:27 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:49:27.228 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 00:49:27 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:49:27.229 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 00:49:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:28 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:49:28.231 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 00:49:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:49:37 np0005539482 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 9347 writes, 33K keys, 9347 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9347 writes, 2355 syncs, 3.97 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2116 writes, 5839 keys, 2116 commit groups, 1.0 writes per commit group, ingest: 7.88 MB, 0.01 MB/s#012Interval WAL: 2116 writes, 782 syncs, 2.71 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:49:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:49:41
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'images']
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:49:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:49:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:49:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:49:42 np0005539482 podman[285959]: 2025-11-29 05:49:42.037842656 +0000 UTC m=+0.091642022 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:49:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:49:42 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 14K writes, 52K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4177 syncs, 3.37 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3169 writes, 9859 keys, 3169 commit groups, 1.0 writes per commit group, ingest: 13.28 MB, 0.02 MB/s#012Interval WAL: 3169 writes, 1178 syncs, 2.69 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:49:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:42 np0005539482 nova_compute[254898]: 2025-11-29 05:49:42.978 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:43 np0005539482 nova_compute[254898]: 2025-11-29 05:49:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:45 np0005539482 podman[285979]: 2025-11-29 05:49:45.073177847 +0000 UTC m=+0.114764078 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 00:49:45 np0005539482 nova_compute[254898]: 2025-11-29 05:49:45.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:46 np0005539482 nova_compute[254898]: 2025-11-29 05:49:46.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:46 np0005539482 nova_compute[254898]: 2025-11-29 05:49:46.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:46 np0005539482 nova_compute[254898]: 2025-11-29 05:49:46.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:49:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:49:47 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 9735 writes, 34K keys, 9735 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9735 writes, 2412 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1751 writes, 3893 keys, 1751 commit groups, 1.0 writes per commit group, ingest: 1.60 MB, 0.00 MB/s#012Interval WAL: 1751 writes, 547 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:49:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.984 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.984 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:49:47 np0005539482 nova_compute[254898]: 2025-11-29 05:49:47.984 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:49:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:49:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149476587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.456 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.620 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.621 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.621 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.622 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.691 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.691 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:49:48 np0005539482 nova_compute[254898]: 2025-11-29 05:49:48.712 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:49:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:49:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226711259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:49:49 np0005539482 nova_compute[254898]: 2025-11-29 05:49:49.092 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:49:49 np0005539482 nova_compute[254898]: 2025-11-29 05:49:49.098 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:49:49 np0005539482 nova_compute[254898]: 2025-11-29 05:49:49.116 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:49:49 np0005539482 nova_compute[254898]: 2025-11-29 05:49:49.119 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:49:49 np0005539482 nova_compute[254898]: 2025-11-29 05:49:49.119 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:49:50 np0005539482 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 00:49:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:49:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:49:52 np0005539482 nova_compute[254898]: 2025-11-29 05:49:52.120 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:49:52 np0005539482 nova_compute[254898]: 2025-11-29 05:49:52.120 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 00:49:52 np0005539482 nova_compute[254898]: 2025-11-29 05:49:52.121 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 00:49:52 np0005539482 nova_compute[254898]: 2025-11-29 05:49:52.134 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 00:49:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:52 np0005539482 nova_compute[254898]: 2025-11-29 05:49:52.963 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:49:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:55 np0005539482 podman[286053]: 2025-11-29 05:49:55.994927007 +0000 UTC m=+0.046185030 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 00:49:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:49:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:49:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:50:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:50:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:50:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:50:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:50:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:50:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:12 np0005539482 podman[286077]: 2025-11-29 05:50:12.999017478 +0000 UTC m=+0.050014142 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:50:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:50:13.764 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:50:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:50:13.765 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:50:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:50:13.765 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:50:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:50:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528576601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:50:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:50:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528576601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:50:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:16 np0005539482 podman[286098]: 2025-11-29 05:50:16.020566528 +0000 UTC m=+0.079819858 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:50:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:26 np0005539482 podman[286225]: 2025-11-29 05:50:26.119122831 +0000 UTC m=+0.072794680 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:26 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d4be9e86-cf92-480e-970d-fcb04b55df85 does not exist
Nov 29 00:50:26 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 22ec78fc-1a0d-4012-9996-64993acdc1b7 does not exist
Nov 29 00:50:26 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev abba0dab-be50-4429-9961-314a4223ae0e does not exist
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:50:26 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:50:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:50:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:27 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.65545187 +0000 UTC m=+0.039067990 container create 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 00:50:27 np0005539482 systemd[1]: Started libpod-conmon-782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a.scope.
Nov 29 00:50:27 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.722371058 +0000 UTC m=+0.105987198 container init 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.72954017 +0000 UTC m=+0.113156300 container start 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.73247273 +0000 UTC m=+0.116088860 container attach 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.638997654 +0000 UTC m=+0.022613794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:50:27 np0005539482 mystifying_bassi[286552]: 167 167
Nov 29 00:50:27 np0005539482 systemd[1]: libpod-782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a.scope: Deactivated successfully.
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.738253049 +0000 UTC m=+0.121869179 container died 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:50:27 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f661d02cbc62cd57990905657dc82820f425628d7896a5f6d3d6203c7cee0947-merged.mount: Deactivated successfully.
Nov 29 00:50:27 np0005539482 podman[286535]: 2025-11-29 05:50:27.76949045 +0000 UTC m=+0.153106580 container remove 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:50:27 np0005539482 systemd[1]: libpod-conmon-782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a.scope: Deactivated successfully.
Nov 29 00:50:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:27 np0005539482 podman[286576]: 2025-11-29 05:50:27.942891615 +0000 UTC m=+0.046467687 container create 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:50:27 np0005539482 systemd[1]: Started libpod-conmon-2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941.scope.
Nov 29 00:50:28 np0005539482 podman[286576]: 2025-11-29 05:50:27.920704633 +0000 UTC m=+0.024280755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:50:28 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:50:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:28 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:28 np0005539482 podman[286576]: 2025-11-29 05:50:28.038889902 +0000 UTC m=+0.142465984 container init 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:50:28 np0005539482 podman[286576]: 2025-11-29 05:50:28.045979262 +0000 UTC m=+0.149555364 container start 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 00:50:28 np0005539482 podman[286576]: 2025-11-29 05:50:28.049769173 +0000 UTC m=+0.153345255 container attach 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:50:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:29 np0005539482 peaceful_noether[286593]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:50:29 np0005539482 peaceful_noether[286593]: --> relative data size: 1.0
Nov 29 00:50:29 np0005539482 peaceful_noether[286593]: --> All data devices are unavailable
Nov 29 00:50:29 np0005539482 systemd[1]: libpod-2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941.scope: Deactivated successfully.
Nov 29 00:50:29 np0005539482 podman[286576]: 2025-11-29 05:50:29.060356292 +0000 UTC m=+1.163932384 container died 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:50:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a-merged.mount: Deactivated successfully.
Nov 29 00:50:29 np0005539482 podman[286576]: 2025-11-29 05:50:29.107515035 +0000 UTC m=+1.211091107 container remove 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:50:29 np0005539482 systemd[1]: libpod-conmon-2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941.scope: Deactivated successfully.
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.706577077 +0000 UTC m=+0.046679002 container create 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 00:50:29 np0005539482 systemd[1]: Started libpod-conmon-09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b.scope.
Nov 29 00:50:29 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.68547307 +0000 UTC m=+0.025575025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.784454398 +0000 UTC m=+0.124556343 container init 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.791764443 +0000 UTC m=+0.131866358 container start 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.794416227 +0000 UTC m=+0.134518182 container attach 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:50:29 np0005539482 pensive_varahamihira[286791]: 167 167
Nov 29 00:50:29 np0005539482 systemd[1]: libpod-09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b.scope: Deactivated successfully.
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.799391916 +0000 UTC m=+0.139493851 container died 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:50:29 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7ba44baceee81abe1fc0bc9b9fcc269c91fd0e18217727532dfc6d9f963ee768-merged.mount: Deactivated successfully.
Nov 29 00:50:29 np0005539482 podman[286775]: 2025-11-29 05:50:29.83243054 +0000 UTC m=+0.172532465 container remove 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:50:29 np0005539482 systemd[1]: libpod-conmon-09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b.scope: Deactivated successfully.
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.029118985 +0000 UTC m=+0.046756174 container create d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 00:50:30 np0005539482 systemd[1]: Started libpod-conmon-d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1.scope.
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.007464505 +0000 UTC m=+0.025101794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:50:30 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:50:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:30 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.14255002 +0000 UTC m=+0.160187229 container init d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.148396752 +0000 UTC m=+0.166033951 container start d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.152175982 +0000 UTC m=+0.169813171 container attach d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 00:50:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]: {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:    "0": [
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:        {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "devices": [
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "/dev/loop3"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            ],
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_name": "ceph_lv0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_size": "21470642176",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "name": "ceph_lv0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "tags": {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cluster_name": "ceph",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.crush_device_class": "",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.encrypted": "0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osd_id": "0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.type": "block",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.vdo": "0"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            },
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "type": "block",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "vg_name": "ceph_vg0"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:        }
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:    ],
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:    "1": [
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:        {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "devices": [
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "/dev/loop4"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            ],
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_name": "ceph_lv1",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_size": "21470642176",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "name": "ceph_lv1",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "tags": {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cluster_name": "ceph",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.crush_device_class": "",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.encrypted": "0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osd_id": "1",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.type": "block",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.vdo": "0"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            },
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "type": "block",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "vg_name": "ceph_vg1"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:        }
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:    ],
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:    "2": [
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:        {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "devices": [
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "/dev/loop5"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            ],
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_name": "ceph_lv2",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_size": "21470642176",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "name": "ceph_lv2",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "tags": {
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.cluster_name": "ceph",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.crush_device_class": "",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.encrypted": "0",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osd_id": "2",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.type": "block",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:                "ceph.vdo": "0"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            },
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "type": "block",
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:            "vg_name": "ceph_vg2"
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:        }
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]:    ]
Nov 29 00:50:30 np0005539482 stupefied_wozniak[286830]: }
Nov 29 00:50:30 np0005539482 systemd[1]: libpod-d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1.scope: Deactivated successfully.
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.926842893 +0000 UTC m=+0.944480092 container died d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:50:30 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6-merged.mount: Deactivated successfully.
Nov 29 00:50:30 np0005539482 podman[286813]: 2025-11-29 05:50:30.989931739 +0000 UTC m=+1.007568928 container remove d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:50:30 np0005539482 systemd[1]: libpod-conmon-d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1.scope: Deactivated successfully.
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.662087957 +0000 UTC m=+0.038673871 container create cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:50:31 np0005539482 systemd[1]: Started libpod-conmon-cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7.scope.
Nov 29 00:50:31 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.643851919 +0000 UTC m=+0.020437853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.741165766 +0000 UTC m=+0.117751690 container init cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.746962185 +0000 UTC m=+0.123548089 container start cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:50:31 np0005539482 nervous_moore[287010]: 167 167
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.751942715 +0000 UTC m=+0.128528629 container attach cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 00:50:31 np0005539482 systemd[1]: libpod-cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7.scope: Deactivated successfully.
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.753256707 +0000 UTC m=+0.129842611 container died cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:50:31 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f0fc78293ae7c5265f0248ddedb17ed1b351872d7e065a51e80bf2b5fffa82ab-merged.mount: Deactivated successfully.
Nov 29 00:50:31 np0005539482 podman[286994]: 2025-11-29 05:50:31.788500634 +0000 UTC m=+0.165086548 container remove cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 00:50:31 np0005539482 systemd[1]: libpod-conmon-cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7.scope: Deactivated successfully.
Nov 29 00:50:31 np0005539482 podman[287034]: 2025-11-29 05:50:31.940416194 +0000 UTC m=+0.036296613 container create cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:50:31 np0005539482 systemd[1]: Started libpod-conmon-cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325.scope.
Nov 29 00:50:32 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:50:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:32 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:50:32 np0005539482 podman[287034]: 2025-11-29 05:50:31.925867204 +0000 UTC m=+0.021747633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:50:32 np0005539482 podman[287034]: 2025-11-29 05:50:32.022948996 +0000 UTC m=+0.118829455 container init cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 00:50:32 np0005539482 podman[287034]: 2025-11-29 05:50:32.032010024 +0000 UTC m=+0.127890423 container start cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:50:32 np0005539482 podman[287034]: 2025-11-29 05:50:32.038681664 +0000 UTC m=+0.134562073 container attach cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 00:50:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]: {
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "osd_id": 0,
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "type": "bluestore"
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:    },
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "osd_id": 1,
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "type": "bluestore"
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:    },
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "osd_id": 2,
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:        "type": "bluestore"
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]:    }
Nov 29 00:50:32 np0005539482 gracious_aryabhata[287051]: }
Nov 29 00:50:32 np0005539482 systemd[1]: libpod-cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325.scope: Deactivated successfully.
Nov 29 00:50:32 np0005539482 podman[287034]: 2025-11-29 05:50:32.992845687 +0000 UTC m=+1.088726106 container died cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:50:33 np0005539482 systemd[1]: var-lib-containers-storage-overlay-34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361-merged.mount: Deactivated successfully.
Nov 29 00:50:33 np0005539482 podman[287034]: 2025-11-29 05:50:33.051712271 +0000 UTC m=+1.147592680 container remove cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:50:33 np0005539482 systemd[1]: libpod-conmon-cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325.scope: Deactivated successfully.
Nov 29 00:50:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:50:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:33 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:50:33 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:33 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev ad2956fe-baef-4782-89ed-a089b5d0114e does not exist
Nov 29 00:50:33 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev e3fe0e5c-21e6-4dfa-9a19-f35fffebac75 does not exist
Nov 29 00:50:34 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:34 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:50:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:50:41
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:50:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:50:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:50:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:50:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:43 np0005539482 nova_compute[254898]: 2025-11-29 05:50:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:50:44 np0005539482 podman[287148]: 2025-11-29 05:50:44.043159733 +0000 UTC m=+0.093552298 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 00:50:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:50:44 np0005539482 nova_compute[254898]: 2025-11-29 05:50:44.950 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:50:45 np0005539482 nova_compute[254898]: 2025-11-29 05:50:45.073 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:50:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:50:46 np0005539482 nova_compute[254898]: 2025-11-29 05:50:46.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:50:47 np0005539482 podman[287166]: 2025-11-29 05:50:47.022189532 +0000 UTC m=+0.079633524 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 00:50:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:47 np0005539482 nova_compute[254898]: 2025-11-29 05:50:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:50:47 np0005539482 nova_compute[254898]: 2025-11-29 05:50:47.981 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:50:47 np0005539482 nova_compute[254898]: 2025-11-29 05:50:47.982 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:50:47 np0005539482 nova_compute[254898]: 2025-11-29 05:50:47.982 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:50:47 np0005539482 nova_compute[254898]: 2025-11-29 05:50:47.983 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:50:47 np0005539482 nova_compute[254898]: 2025-11-29 05:50:47.983 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:50:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:50:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:50:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325678523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.409 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.563 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.564 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4944MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.564 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.564 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.632 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.633 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:50:48 np0005539482 nova_compute[254898]: 2025-11-29 05:50:48.650 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:50:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:50:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593395698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:50:49 np0005539482 nova_compute[254898]: 2025-11-29 05:50:49.049 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:50:49 np0005539482 nova_compute[254898]: 2025-11-29 05:50:49.056 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 00:50:49 np0005539482 nova_compute[254898]: 2025-11-29 05:50:49.074 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 00:50:49 np0005539482 nova_compute[254898]: 2025-11-29 05:50:49.078 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 00:50:49 np0005539482 nova_compute[254898]: 2025-11-29 05:50:49.078 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:50:50 np0005539482 nova_compute[254898]: 2025-11-29 05:50:50.080 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:50:50 np0005539482 nova_compute[254898]: 2025-11-29 05:50:50.081 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:50:50 np0005539482 nova_compute[254898]: 2025-11-29 05:50:50.081 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:50:50 np0005539482 nova_compute[254898]: 2025-11-29 05:50:50.081 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 00:50:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:50:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:50:51 np0005539482 nova_compute[254898]: 2025-11-29 05:50:51.955 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:50:51 np0005539482 nova_compute[254898]: 2025-11-29 05:50:51.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 00:50:51 np0005539482 nova_compute[254898]: 2025-11-29 05:50:51.956 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 00:50:51 np0005539482 nova_compute[254898]: 2025-11-29 05:50:51.978 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 00:50:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:50:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:53 np0005539482 nova_compute[254898]: 2025-11-29 05:50:53.971 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 00:50:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:50:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 00:50:56 np0005539482 podman[287236]: 2025-11-29 05:50:56.997985355 +0000 UTC m=+0.053993188 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 00:50:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:50:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.888684) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467888788, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1224, "num_deletes": 251, "total_data_size": 1871730, "memory_usage": 1902512, "flush_reason": "Manual Compaction"}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467900851, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1109578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31037, "largest_seqno": 32260, "table_properties": {"data_size": 1105092, "index_size": 1946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11708, "raw_average_key_size": 20, "raw_value_size": 1095392, "raw_average_value_size": 1931, "num_data_blocks": 89, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395346, "oldest_key_time": 1764395346, "file_creation_time": 1764395467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 12244 microseconds, and 8003 cpu microseconds.
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.900934) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1109578 bytes OK
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.900965) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.902845) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.902870) EVENT_LOG_v1 {"time_micros": 1764395467902862, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.902894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1866175, prev total WAL file size 1866175, number of live WAL files 2.
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.904101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1083KB)], [65(10166KB)]
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467904153, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11519753, "oldest_snapshot_seqno": -1}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6226 keys, 8929158 bytes, temperature: kUnknown
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467960387, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8929158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8888290, "index_size": 24182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 156584, "raw_average_key_size": 25, "raw_value_size": 8777459, "raw_average_value_size": 1409, "num_data_blocks": 990, "num_entries": 6226, "num_filter_entries": 6226, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.960595) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8929158 bytes
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.961787) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.6 rd, 158.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(18.4) write-amplify(8.0) OK, records in: 6685, records dropped: 459 output_compression: NoCompression
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.961801) EVENT_LOG_v1 {"time_micros": 1764395467961794, "job": 36, "event": "compaction_finished", "compaction_time_micros": 56302, "compaction_time_cpu_micros": 20519, "output_level": 6, "num_output_files": 1, "total_output_size": 8929158, "num_input_records": 6685, "num_output_records": 6226, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467962043, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467963467, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.904034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:51:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:51:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:51:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:51:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:51:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:51:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:51:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:51:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:51:13.766 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:51:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:51:13.766 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:51:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:51:13.766 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:51:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:51:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3386820668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:51:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:51:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3386820668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:51:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:15 np0005539482 podman[287260]: 2025-11-29 05:51:15.004144791 +0000 UTC m=+0.059316995 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:51:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:18 np0005539482 podman[287278]: 2025-11-29 05:51:18.066355338 +0000 UTC m=+0.115706840 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 00:51:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:28 np0005539482 podman[287304]: 2025-11-29 05:51:28.002128257 +0000 UTC m=+0.053388464 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:51:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:34 np0005539482 podman[287497]: 2025-11-29 05:51:34.016128699 +0000 UTC m=+0.077324248 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:51:34 np0005539482 podman[287497]: 2025-11-29 05:51:34.138663043 +0000 UTC m=+0.199858622 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 00:51:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:51:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:34 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:51:34 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:35 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6bb99168-2cb4-481a-bd80-ca589e21c9a2 does not exist
Nov 29 00:51:35 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 870a3744-574f-4cce-be67-1e096a301f14 does not exist
Nov 29 00:51:35 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a61a9331-df76-4314-8462-e4e9bbee2498 does not exist
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:35 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.233056119 +0000 UTC m=+0.039841838 container create b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:51:36 np0005539482 systemd[1]: Started libpod-conmon-b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b.scope.
Nov 29 00:51:36 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.215012816 +0000 UTC m=+0.021798585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.311597766 +0000 UTC m=+0.118383515 container init b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.316945815 +0000 UTC m=+0.123731534 container start b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.320030578 +0000 UTC m=+0.126816347 container attach b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:51:36 np0005539482 eager_bhabha[287943]: 167 167
Nov 29 00:51:36 np0005539482 systemd[1]: libpod-b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b.scope: Deactivated successfully.
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.322048548 +0000 UTC m=+0.128834287 container died b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 00:51:36 np0005539482 systemd[1]: var-lib-containers-storage-overlay-64a2d5be05568cb2c643080b0fdee7b9074c9390179112758e887be3505a170f-merged.mount: Deactivated successfully.
Nov 29 00:51:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:36 np0005539482 podman[287927]: 2025-11-29 05:51:36.362646983 +0000 UTC m=+0.169432692 container remove b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:51:36 np0005539482 systemd[1]: libpod-conmon-b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b.scope: Deactivated successfully.
Nov 29 00:51:36 np0005539482 podman[287969]: 2025-11-29 05:51:36.544974903 +0000 UTC m=+0.053302412 container create 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:51:36 np0005539482 systemd[1]: Started libpod-conmon-7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac.scope.
Nov 29 00:51:36 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:51:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:36 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:36 np0005539482 podman[287969]: 2025-11-29 05:51:36.620984799 +0000 UTC m=+0.129312358 container init 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:51:36 np0005539482 podman[287969]: 2025-11-29 05:51:36.52529577 +0000 UTC m=+0.033623279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:51:36 np0005539482 podman[287969]: 2025-11-29 05:51:36.628648823 +0000 UTC m=+0.136976342 container start 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 00:51:36 np0005539482 podman[287969]: 2025-11-29 05:51:36.632140657 +0000 UTC m=+0.140468176 container attach 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:51:37 np0005539482 vigorous_archimedes[287986]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:51:37 np0005539482 vigorous_archimedes[287986]: --> relative data size: 1.0
Nov 29 00:51:37 np0005539482 vigorous_archimedes[287986]: --> All data devices are unavailable
Nov 29 00:51:37 np0005539482 systemd[1]: libpod-7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac.scope: Deactivated successfully.
Nov 29 00:51:37 np0005539482 podman[287969]: 2025-11-29 05:51:37.628580646 +0000 UTC m=+1.136908195 container died 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 00:51:37 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6-merged.mount: Deactivated successfully.
Nov 29 00:51:37 np0005539482 podman[287969]: 2025-11-29 05:51:37.674418117 +0000 UTC m=+1.182745626 container remove 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:51:37 np0005539482 systemd[1]: libpod-conmon-7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac.scope: Deactivated successfully.
Nov 29 00:51:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.213713103 +0000 UTC m=+0.034581802 container create d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 00:51:38 np0005539482 systemd[1]: Started libpod-conmon-d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c.scope.
Nov 29 00:51:38 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.200149517 +0000 UTC m=+0.021018236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.302288951 +0000 UTC m=+0.123157680 container init d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.309958946 +0000 UTC m=+0.130827645 container start d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.313504311 +0000 UTC m=+0.134373040 container attach d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:51:38 np0005539482 sharp_edison[288185]: 167 167
Nov 29 00:51:38 np0005539482 systemd[1]: libpod-d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c.scope: Deactivated successfully.
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.314931725 +0000 UTC m=+0.135800424 container died d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:51:38 np0005539482 systemd[1]: var-lib-containers-storage-overlay-ced125e79e99ee2c43c8308b39ef910c5fb9a10050da171033bb02f074709c98-merged.mount: Deactivated successfully.
Nov 29 00:51:38 np0005539482 podman[288169]: 2025-11-29 05:51:38.348505741 +0000 UTC m=+0.169374440 container remove d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:51:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:38 np0005539482 systemd[1]: libpod-conmon-d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c.scope: Deactivated successfully.
Nov 29 00:51:38 np0005539482 podman[288210]: 2025-11-29 05:51:38.49660344 +0000 UTC m=+0.040228718 container create 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 00:51:38 np0005539482 systemd[1]: Started libpod-conmon-3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a.scope.
Nov 29 00:51:38 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:51:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:38 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:38 np0005539482 podman[288210]: 2025-11-29 05:51:38.568739292 +0000 UTC m=+0.112364580 container init 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 00:51:38 np0005539482 podman[288210]: 2025-11-29 05:51:38.478756511 +0000 UTC m=+0.022381839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:51:38 np0005539482 podman[288210]: 2025-11-29 05:51:38.5753085 +0000 UTC m=+0.118933778 container start 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:51:38 np0005539482 podman[288210]: 2025-11-29 05:51:38.578178329 +0000 UTC m=+0.121803617 container attach 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 00:51:39 np0005539482 charming_babbage[288226]: {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:    "0": [
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:        {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "devices": [
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "/dev/loop3"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            ],
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_name": "ceph_lv0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_size": "21470642176",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "name": "ceph_lv0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "tags": {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cluster_name": "ceph",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.crush_device_class": "",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.encrypted": "0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osd_id": "0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.type": "block",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.vdo": "0"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            },
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "type": "block",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "vg_name": "ceph_vg0"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:        }
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:    ],
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:    "1": [
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:        {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "devices": [
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "/dev/loop4"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            ],
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_name": "ceph_lv1",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_size": "21470642176",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "name": "ceph_lv1",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "tags": {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cluster_name": "ceph",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.crush_device_class": "",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.encrypted": "0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osd_id": "1",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.type": "block",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.vdo": "0"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            },
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "type": "block",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "vg_name": "ceph_vg1"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:        }
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:    ],
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:    "2": [
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:        {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "devices": [
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "/dev/loop5"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            ],
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_name": "ceph_lv2",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_size": "21470642176",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "name": "ceph_lv2",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "tags": {
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.cluster_name": "ceph",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.crush_device_class": "",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.encrypted": "0",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osd_id": "2",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.type": "block",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:                "ceph.vdo": "0"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            },
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "type": "block",
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:            "vg_name": "ceph_vg2"
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:        }
Nov 29 00:51:39 np0005539482 charming_babbage[288226]:    ]
Nov 29 00:51:39 np0005539482 charming_babbage[288226]: }
Nov 29 00:51:39 np0005539482 systemd[1]: libpod-3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a.scope: Deactivated successfully.
Nov 29 00:51:39 np0005539482 podman[288210]: 2025-11-29 05:51:39.317610943 +0000 UTC m=+0.861236221 container died 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:51:39 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b-merged.mount: Deactivated successfully.
Nov 29 00:51:39 np0005539482 podman[288210]: 2025-11-29 05:51:39.362736148 +0000 UTC m=+0.906361426 container remove 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:51:39 np0005539482 systemd[1]: libpod-conmon-3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a.scope: Deactivated successfully.
Nov 29 00:51:39 np0005539482 podman[288387]: 2025-11-29 05:51:39.906553913 +0000 UTC m=+0.034540631 container create 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:51:39 np0005539482 systemd[1]: Started libpod-conmon-442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc.scope.
Nov 29 00:51:39 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:51:39 np0005539482 podman[288387]: 2025-11-29 05:51:39.975854237 +0000 UTC m=+0.103840975 container init 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:51:39 np0005539482 podman[288387]: 2025-11-29 05:51:39.983101392 +0000 UTC m=+0.111088110 container start 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 00:51:39 np0005539482 podman[288387]: 2025-11-29 05:51:39.98637316 +0000 UTC m=+0.114359878 container attach 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:51:39 np0005539482 busy_mcnulty[288404]: 167 167
Nov 29 00:51:39 np0005539482 podman[288387]: 2025-11-29 05:51:39.892755191 +0000 UTC m=+0.020741929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:51:39 np0005539482 systemd[1]: libpod-442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc.scope: Deactivated successfully.
Nov 29 00:51:39 np0005539482 podman[288387]: 2025-11-29 05:51:39.988091981 +0000 UTC m=+0.116078699 container died 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:51:40 np0005539482 systemd[1]: var-lib-containers-storage-overlay-3f1629f967f91c5a7a2e86ad4915225d1620ed614859ed1ea6bf2e0dc559c860-merged.mount: Deactivated successfully.
Nov 29 00:51:40 np0005539482 podman[288387]: 2025-11-29 05:51:40.019396704 +0000 UTC m=+0.147383422 container remove 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 00:51:40 np0005539482 systemd[1]: libpod-conmon-442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc.scope: Deactivated successfully.
Nov 29 00:51:40 np0005539482 podman[288427]: 2025-11-29 05:51:40.155221357 +0000 UTC m=+0.034706135 container create 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:51:40 np0005539482 systemd[1]: Started libpod-conmon-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope.
Nov 29 00:51:40 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:51:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:40 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:51:40 np0005539482 podman[288427]: 2025-11-29 05:51:40.234207224 +0000 UTC m=+0.113692032 container init 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:51:40 np0005539482 podman[288427]: 2025-11-29 05:51:40.139459818 +0000 UTC m=+0.018944616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:51:40 np0005539482 podman[288427]: 2025-11-29 05:51:40.242808191 +0000 UTC m=+0.122292969 container start 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 00:51:40 np0005539482 podman[288427]: 2025-11-29 05:51:40.247338179 +0000 UTC m=+0.126822957 container attach 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:51:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]: {
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "osd_id": 0,
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "type": "bluestore"
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:    },
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "osd_id": 1,
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "type": "bluestore"
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:    },
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "osd_id": 2,
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:        "type": "bluestore"
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]:    }
Nov 29 00:51:41 np0005539482 sharp_sammet[288444]: }
Nov 29 00:51:41 np0005539482 systemd[1]: libpod-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope: Deactivated successfully.
Nov 29 00:51:41 np0005539482 podman[288427]: 2025-11-29 05:51:41.300503471 +0000 UTC m=+1.179988319 container died 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 00:51:41 np0005539482 systemd[1]: libpod-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope: Consumed 1.067s CPU time.
Nov 29 00:51:41 np0005539482 systemd[1]: var-lib-containers-storage-overlay-f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c-merged.mount: Deactivated successfully.
Nov 29 00:51:41 np0005539482 podman[288427]: 2025-11-29 05:51:41.365069293 +0000 UTC m=+1.244554081 container remove 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 00:51:41 np0005539482 systemd[1]: libpod-conmon-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope: Deactivated successfully.
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:51:41
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', '.mgr']
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:51:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:51:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:41 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:51:41 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev d94669fa-0adf-40a2-9b8b-af5d38734830 does not exist
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 7aa1b7b2-4002-4ff8-9417-15a95e86c32a does not exist
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:51:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:51:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:51:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:51:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:51:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:44 np0005539482 nova_compute[254898]: 2025-11-29 05:51:44.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:44 np0005539482 nova_compute[254898]: 2025-11-29 05:51:44.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:46 np0005539482 podman[288540]: 2025-11-29 05:51:46.004600695 +0000 UTC m=+0.057164375 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 00:51:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:48 np0005539482 nova_compute[254898]: 2025-11-29 05:51:48.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:49 np0005539482 podman[288561]: 2025-11-29 05:51:49.056820493 +0000 UTC m=+0.110897915 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:51:49 np0005539482 nova_compute[254898]: 2025-11-29 05:51:49.984 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:51:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:51:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542259907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.421 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.637 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.638 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4969MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.638 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.638 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.686 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.687 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:51:50 np0005539482 nova_compute[254898]: 2025-11-29 05:51:50.699 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:51:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:51:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330873549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:51:51 np0005539482 nova_compute[254898]: 2025-11-29 05:51:51.111 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:51:51 np0005539482 nova_compute[254898]: 2025-11-29 05:51:51.116 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:51:51 np0005539482 nova_compute[254898]: 2025-11-29 05:51:51.153 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:51:51 np0005539482 nova_compute[254898]: 2025-11-29 05:51:51.155 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:51:51 np0005539482 nova_compute[254898]: 2025-11-29 05:51:51.155 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:51:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:51:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:53 np0005539482 nova_compute[254898]: 2025-11-29 05:51:53.156 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:53 np0005539482 nova_compute[254898]: 2025-11-29 05:51:53.156 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:51:53 np0005539482 nova_compute[254898]: 2025-11-29 05:51:53.157 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:51:53 np0005539482 nova_compute[254898]: 2025-11-29 05:51:53.182 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:51:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:54 np0005539482 nova_compute[254898]: 2025-11-29 05:51:54.975 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:51:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:51:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:51:59 np0005539482 podman[288633]: 2025-11-29 05:51:59.031988328 +0000 UTC m=+0.078513619 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 00:52:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:52:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:52:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:52:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:52:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:52:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:52:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:52:13.767 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:52:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:52:13.767 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:52:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:52:13.767 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:52:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:52:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248572375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:52:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:52:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248572375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:52:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:17 np0005539482 podman[288655]: 2025-11-29 05:52:17.005960378 +0000 UTC m=+0.057842341 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:52:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:20 np0005539482 podman[288677]: 2025-11-29 05:52:20.02803629 +0000 UTC m=+0.084364248 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:52:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:30 np0005539482 podman[288705]: 2025-11-29 05:52:30.003502514 +0000 UTC m=+0.056489508 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 00:52:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:52:41
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'vms', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:52:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:52:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:52:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:52:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:52:42 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 396c092f-3c70-4bab-98c4-ca27d561a1ba does not exist
Nov 29 00:52:42 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 36d03458-56d3-4860-b0e7-d5caa5f9b2d1 does not exist
Nov 29 00:52:42 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 6cf257fc-7c2a-4535-b3b1-b87c2d05c62f does not exist
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:52:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.175025402 +0000 UTC m=+0.062934207 container create c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 00:52:43 np0005539482 systemd[1]: Started libpod-conmon-c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98.scope.
Nov 29 00:52:43 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.154221814 +0000 UTC m=+0.042130639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.257448744 +0000 UTC m=+0.145357579 container init c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.264561298 +0000 UTC m=+0.152470103 container start c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.267722095 +0000 UTC m=+0.155630910 container attach c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:52:43 np0005539482 jolly_dhawan[289014]: 167 167
Nov 29 00:52:43 np0005539482 systemd[1]: libpod-c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98.scope: Deactivated successfully.
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.273062035 +0000 UTC m=+0.160970880 container died c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 00:52:43 np0005539482 systemd[1]: var-lib-containers-storage-overlay-8133a71ab7ddbdedb5bf9586d17a26eba490d48fed745902f815b369c4ef9f17-merged.mount: Deactivated successfully.
Nov 29 00:52:43 np0005539482 podman[288998]: 2025-11-29 05:52:43.318407923 +0000 UTC m=+0.206316718 container remove c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 00:52:43 np0005539482 systemd[1]: libpod-conmon-c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98.scope: Deactivated successfully.
Nov 29 00:52:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:52:43 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:52:43 np0005539482 podman[289039]: 2025-11-29 05:52:43.548915038 +0000 UTC m=+0.069536377 container create 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:52:43 np0005539482 systemd[1]: Started libpod-conmon-9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2.scope.
Nov 29 00:52:43 np0005539482 podman[289039]: 2025-11-29 05:52:43.523261983 +0000 UTC m=+0.043883332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:52:43 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:52:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:43 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:43 np0005539482 podman[289039]: 2025-11-29 05:52:43.655412498 +0000 UTC m=+0.176033817 container init 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:52:43 np0005539482 podman[289039]: 2025-11-29 05:52:43.662365388 +0000 UTC m=+0.182986687 container start 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 00:52:43 np0005539482 podman[289039]: 2025-11-29 05:52:43.666294804 +0000 UTC m=+0.186916103 container attach 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:52:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:44 np0005539482 kind_mirzakhani[289056]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:52:44 np0005539482 kind_mirzakhani[289056]: --> relative data size: 1.0
Nov 29 00:52:44 np0005539482 kind_mirzakhani[289056]: --> All data devices are unavailable
Nov 29 00:52:44 np0005539482 systemd[1]: libpod-9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2.scope: Deactivated successfully.
Nov 29 00:52:44 np0005539482 podman[289085]: 2025-11-29 05:52:44.729430303 +0000 UTC m=+0.024189291 container died 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:52:44 np0005539482 systemd[1]: var-lib-containers-storage-overlay-b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48-merged.mount: Deactivated successfully.
Nov 29 00:52:44 np0005539482 podman[289085]: 2025-11-29 05:52:44.769082772 +0000 UTC m=+0.063841740 container remove 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 00:52:44 np0005539482 systemd[1]: libpod-conmon-9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2.scope: Deactivated successfully.
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.298409941 +0000 UTC m=+0.039163386 container create 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 00:52:45 np0005539482 systemd[1]: Started libpod-conmon-046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad.scope.
Nov 29 00:52:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.368848801 +0000 UTC m=+0.109602246 container init 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.375083522 +0000 UTC m=+0.115836947 container start 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.378039065 +0000 UTC m=+0.118792520 container attach 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.283082927 +0000 UTC m=+0.023836382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:52:45 np0005539482 practical_faraday[289259]: 167 167
Nov 29 00:52:45 np0005539482 systemd[1]: libpod-046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad.scope: Deactivated successfully.
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.380381582 +0000 UTC m=+0.121135047 container died 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:52:45 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6a986391aee65a823ce263a8f937ba8f34b5e8c9b7db025ca0d8920c22ac1ee6-merged.mount: Deactivated successfully.
Nov 29 00:52:45 np0005539482 podman[289241]: 2025-11-29 05:52:45.413417928 +0000 UTC m=+0.154171353 container remove 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 00:52:45 np0005539482 systemd[1]: libpod-conmon-046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad.scope: Deactivated successfully.
Nov 29 00:52:45 np0005539482 podman[289284]: 2025-11-29 05:52:45.579850971 +0000 UTC m=+0.035013136 container create 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:52:45 np0005539482 systemd[1]: Started libpod-conmon-9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3.scope.
Nov 29 00:52:45 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:52:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:45 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:45 np0005539482 podman[289284]: 2025-11-29 05:52:45.660010327 +0000 UTC m=+0.115172522 container init 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:52:45 np0005539482 podman[289284]: 2025-11-29 05:52:45.565242434 +0000 UTC m=+0.020404619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:52:45 np0005539482 podman[289284]: 2025-11-29 05:52:45.666853854 +0000 UTC m=+0.122016019 container start 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:52:45 np0005539482 podman[289284]: 2025-11-29 05:52:45.669737475 +0000 UTC m=+0.124899670 container attach 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 00:52:45 np0005539482 nova_compute[254898]: 2025-11-29 05:52:45.948 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:45 np0005539482 nova_compute[254898]: 2025-11-29 05:52:45.973 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]: {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:    "0": [
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:        {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "devices": [
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "/dev/loop3"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            ],
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_name": "ceph_lv0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_size": "21470642176",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "name": "ceph_lv0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "tags": {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cluster_name": "ceph",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.crush_device_class": "",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.encrypted": "0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osd_id": "0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.type": "block",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.vdo": "0"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            },
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "type": "block",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "vg_name": "ceph_vg0"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:        }
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:    ],
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:    "1": [
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:        {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "devices": [
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "/dev/loop4"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            ],
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_name": "ceph_lv1",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_size": "21470642176",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "name": "ceph_lv1",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "tags": {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cluster_name": "ceph",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.crush_device_class": "",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.encrypted": "0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osd_id": "1",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.type": "block",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.vdo": "0"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            },
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "type": "block",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "vg_name": "ceph_vg1"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:        }
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:    ],
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:    "2": [
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:        {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "devices": [
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "/dev/loop5"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            ],
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_name": "ceph_lv2",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_size": "21470642176",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "name": "ceph_lv2",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "tags": {
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.cluster_name": "ceph",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.crush_device_class": "",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.encrypted": "0",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osd_id": "2",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.type": "block",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:                "ceph.vdo": "0"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            },
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "type": "block",
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:            "vg_name": "ceph_vg2"
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:        }
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]:    ]
Nov 29 00:52:46 np0005539482 youthful_clarke[289300]: }
Nov 29 00:52:46 np0005539482 systemd[1]: libpod-9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3.scope: Deactivated successfully.
Nov 29 00:52:46 np0005539482 podman[289284]: 2025-11-29 05:52:46.382874681 +0000 UTC m=+0.838036846 container died 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 00:52:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:46 np0005539482 systemd[1]: var-lib-containers-storage-overlay-4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f-merged.mount: Deactivated successfully.
Nov 29 00:52:46 np0005539482 podman[289284]: 2025-11-29 05:52:46.429291464 +0000 UTC m=+0.884453629 container remove 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:52:46 np0005539482 systemd[1]: libpod-conmon-9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3.scope: Deactivated successfully.
Nov 29 00:52:46 np0005539482 nova_compute[254898]: 2025-11-29 05:52:46.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.067375739 +0000 UTC m=+0.039151337 container create 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:52:47 np0005539482 systemd[1]: Started libpod-conmon-50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80.scope.
Nov 29 00:52:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.137780067 +0000 UTC m=+0.109555685 container init 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.048746384 +0000 UTC m=+0.020522002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.146442199 +0000 UTC m=+0.118217797 container start 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.149856102 +0000 UTC m=+0.121631700 container attach 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:52:47 np0005539482 youthful_albattani[289476]: 167 167
Nov 29 00:52:47 np0005539482 systemd[1]: libpod-50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80.scope: Deactivated successfully.
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.15342623 +0000 UTC m=+0.125201828 container died 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:52:47 np0005539482 systemd[1]: var-lib-containers-storage-overlay-fe975b8de2252f003cc86233d436b9fffdb885ed52a9b85988e4b8089f3bfbb5-merged.mount: Deactivated successfully.
Nov 29 00:52:47 np0005539482 podman[289459]: 2025-11-29 05:52:47.192714179 +0000 UTC m=+0.164489777 container remove 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:52:47 np0005539482 systemd[1]: libpod-conmon-50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80.scope: Deactivated successfully.
Nov 29 00:52:47 np0005539482 podman[289473]: 2025-11-29 05:52:47.21903239 +0000 UTC m=+0.103824925 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 00:52:47 np0005539482 podman[289517]: 2025-11-29 05:52:47.361863117 +0000 UTC m=+0.039166927 container create 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:52:47 np0005539482 systemd[1]: Started libpod-conmon-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope.
Nov 29 00:52:47 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:52:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:47 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:52:47 np0005539482 podman[289517]: 2025-11-29 05:52:47.343181811 +0000 UTC m=+0.020485641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:52:47 np0005539482 podman[289517]: 2025-11-29 05:52:47.440527597 +0000 UTC m=+0.117831427 container init 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:52:47 np0005539482 podman[289517]: 2025-11-29 05:52:47.450316206 +0000 UTC m=+0.127620016 container start 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 00:52:47 np0005539482 podman[289517]: 2025-11-29 05:52:47.453511714 +0000 UTC m=+0.130815524 container attach 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 00:52:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:48 np0005539482 confident_leakey[289533]: {
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "osd_id": 0,
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "type": "bluestore"
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:    },
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "osd_id": 1,
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "type": "bluestore"
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:    },
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "osd_id": 2,
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:        "type": "bluestore"
Nov 29 00:52:48 np0005539482 confident_leakey[289533]:    }
Nov 29 00:52:48 np0005539482 confident_leakey[289533]: }
Nov 29 00:52:48 np0005539482 systemd[1]: libpod-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope: Deactivated successfully.
Nov 29 00:52:48 np0005539482 conmon[289533]: conmon 045c93423505625f7222 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope/container/memory.events
Nov 29 00:52:48 np0005539482 podman[289517]: 2025-11-29 05:52:48.325897618 +0000 UTC m=+1.003201428 container died 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:52:48 np0005539482 systemd[1]: var-lib-containers-storage-overlay-490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b-merged.mount: Deactivated successfully.
Nov 29 00:52:48 np0005539482 podman[289517]: 2025-11-29 05:52:48.375941719 +0000 UTC m=+1.053245529 container remove 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:52:48 np0005539482 systemd[1]: libpod-conmon-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope: Deactivated successfully.
Nov 29 00:52:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:52:48 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 98281bd0-4b5f-448d-840f-09ff284418c4 does not exist
Nov 29 00:52:48 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev a3df4935-a65e-4906-b0c7-76080c150636 does not exist
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.446471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568446508, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1074, "num_deletes": 251, "total_data_size": 1589011, "memory_usage": 1607464, "flush_reason": "Manual Compaction"}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568457117, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1552202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32261, "largest_seqno": 33334, "table_properties": {"data_size": 1546944, "index_size": 2718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11139, "raw_average_key_size": 19, "raw_value_size": 1536504, "raw_average_value_size": 2714, "num_data_blocks": 123, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395468, "oldest_key_time": 1764395468, "file_creation_time": 1764395568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10673 microseconds, and 3634 cpu microseconds.
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.457148) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1552202 bytes OK
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.457382) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.458642) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.458656) EVENT_LOG_v1 {"time_micros": 1764395568458651, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.458673) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1584002, prev total WAL file size 1586999, number of live WAL files 2.
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.459395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1515KB)], [68(8719KB)]
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568459424, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10481360, "oldest_snapshot_seqno": -1}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6278 keys, 8760606 bytes, temperature: kUnknown
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568504422, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8760606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8719509, "index_size": 24283, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 158296, "raw_average_key_size": 25, "raw_value_size": 8607893, "raw_average_value_size": 1371, "num_data_blocks": 988, "num_entries": 6278, "num_filter_entries": 6278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.504727) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8760606 bytes
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.505776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.0 rd, 193.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(12.4) write-amplify(5.6) OK, records in: 6792, records dropped: 514 output_compression: NoCompression
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.505791) EVENT_LOG_v1 {"time_micros": 1764395568505783, "job": 38, "event": "compaction_finished", "compaction_time_micros": 45175, "compaction_time_cpu_micros": 20423, "output_level": 6, "num_output_files": 1, "total_output_size": 8760606, "num_input_records": 6792, "num_output_records": 6278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568506077, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568507499, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.459343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:52:48 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:52:48 np0005539482 nova_compute[254898]: 2025-11-29 05:52:48.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.971 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.971 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.972 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.972 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:52:49 np0005539482 nova_compute[254898]: 2025-11-29 05:52:49.972 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:52:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:50 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:52:50 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273346653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.423 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.585 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.586 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.587 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.587 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:52:50 np0005539482 podman[289652]: 2025-11-29 05:52:50.602967047 +0000 UTC m=+0.088541612 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.642 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.642 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:52:50 np0005539482 nova_compute[254898]: 2025-11-29 05:52:50.669 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:52:51 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:52:51 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205061130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:52:51 np0005539482 nova_compute[254898]: 2025-11-29 05:52:51.064 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:52:51 np0005539482 nova_compute[254898]: 2025-11-29 05:52:51.069 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:52:51 np0005539482 nova_compute[254898]: 2025-11-29 05:52:51.091 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:52:51 np0005539482 nova_compute[254898]: 2025-11-29 05:52:51.093 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:52:51 np0005539482 nova_compute[254898]: 2025-11-29 05:52:51.093 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:52:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:52:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:53 np0005539482 nova_compute[254898]: 2025-11-29 05:52:53.094 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:53 np0005539482 nova_compute[254898]: 2025-11-29 05:52:53.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:53 np0005539482 nova_compute[254898]: 2025-11-29 05:52:53.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:52:53 np0005539482 nova_compute[254898]: 2025-11-29 05:52:53.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:52:53 np0005539482 nova_compute[254898]: 2025-11-29 05:52:53.966 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:52:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:54 np0005539482 nova_compute[254898]: 2025-11-29 05:52:54.961 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:52:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:52:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:52:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:00 np0005539482 podman[289701]: 2025-11-29 05:53:00.98919349 +0000 UTC m=+0.046647619 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 00:53:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:53:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:53:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:53:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:53:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:53:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:53:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:53:13.768 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:53:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:53:13.769 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:53:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:53:13.769 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:53:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:53:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/198869822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:53:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:53:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/198869822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:53:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:16 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:17 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:18 np0005539482 podman[289726]: 2025-11-29 05:53:18.003996973 +0000 UTC m=+0.056897250 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 00:53:18 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:20 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:21 np0005539482 podman[289749]: 2025-11-29 05:53:21.016951034 +0000 UTC m=+0.073127855 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 00:53:22 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:22 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:24 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:26 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:27 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:28 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:30 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:31 np0005539482 podman[289776]: 2025-11-29 05:53:31.984903964 +0000 UTC m=+0.045383389 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 00:53:32 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:32 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:34 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:36 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:37 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:38 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:40 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:53:41
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes']
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:53:41 np0005539482 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 00:53:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:53:42 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:53:42 np0005539482 ceph-mgr[75473]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1460327761
Nov 29 00:53:42 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:42 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:44 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:46 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:47 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:47 np0005539482 nova_compute[254898]: 2025-11-29 05:53:47.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:47 np0005539482 nova_compute[254898]: 2025-11-29 05:53:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:47 np0005539482 nova_compute[254898]: 2025-11-29 05:53:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:48 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:48 np0005539482 podman[289822]: 2025-11-29 05:53:48.735561871 +0000 UTC m=+0.083686113 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:53:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 473a5d92-cb91-4786-a7c6-8aa12943fe03 does not exist
Nov 29 00:53:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev db3bc9fb-71c3-4651-b50b-de80c26aaa08 does not exist
Nov 29 00:53:49 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 82bc9350-2034-4c04-a781-8fd76e6cf7d4 does not exist
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:53:49 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 00:53:49 np0005539482 podman[290089]: 2025-11-29 05:53:49.930319243 +0000 UTC m=+0.059938344 container create 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:53:49 np0005539482 nova_compute[254898]: 2025-11-29 05:53:49.967 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:49 np0005539482 systemd[1]: Started libpod-conmon-00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7.scope.
Nov 29 00:53:49 np0005539482 podman[290089]: 2025-11-29 05:53:49.897362339 +0000 UTC m=+0.026981450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:53:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:53:50 np0005539482 podman[290089]: 2025-11-29 05:53:50.039804666 +0000 UTC m=+0.169423767 container init 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 00:53:50 np0005539482 podman[290089]: 2025-11-29 05:53:50.050585969 +0000 UTC m=+0.180205040 container start 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:53:50 np0005539482 podman[290089]: 2025-11-29 05:53:50.054424412 +0000 UTC m=+0.184043493 container attach 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 00:53:50 np0005539482 thirsty_borg[290105]: 167 167
Nov 29 00:53:50 np0005539482 systemd[1]: libpod-00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7.scope: Deactivated successfully.
Nov 29 00:53:50 np0005539482 podman[290089]: 2025-11-29 05:53:50.060159552 +0000 UTC m=+0.189778623 container died 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 00:53:50 np0005539482 systemd[1]: var-lib-containers-storage-overlay-962c672f8cd1d8baf29e6e3ad347039333ca052dff3aead58233bbad138a298b-merged.mount: Deactivated successfully.
Nov 29 00:53:50 np0005539482 podman[290089]: 2025-11-29 05:53:50.103866279 +0000 UTC m=+0.233485350 container remove 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 00:53:50 np0005539482 systemd[1]: libpod-conmon-00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7.scope: Deactivated successfully.
Nov 29 00:53:50 np0005539482 podman[290130]: 2025-11-29 05:53:50.297519186 +0000 UTC m=+0.040035229 container create 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 00:53:50 np0005539482 systemd[1]: Started libpod-conmon-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope.
Nov 29 00:53:50 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:53:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:50 np0005539482 podman[290130]: 2025-11-29 05:53:50.282708554 +0000 UTC m=+0.025224617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:53:50 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:50 np0005539482 podman[290130]: 2025-11-29 05:53:50.386821316 +0000 UTC m=+0.129337369 container init 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:53:50 np0005539482 podman[290130]: 2025-11-29 05:53:50.398416599 +0000 UTC m=+0.140932632 container start 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:53:50 np0005539482 podman[290130]: 2025-11-29 05:53:50.40176039 +0000 UTC m=+0.144276433 container attach 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:53:50 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:50 np0005539482 nova_compute[254898]: 2025-11-29 05:53:50.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:50 np0005539482 nova_compute[254898]: 2025-11-29 05:53:50.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 00:53:51 np0005539482 confident_kowalevski[290147]: --> passed data devices: 0 physical, 3 LVM
Nov 29 00:53:51 np0005539482 confident_kowalevski[290147]: --> relative data size: 1.0
Nov 29 00:53:51 np0005539482 confident_kowalevski[290147]: --> All data devices are unavailable
Nov 29 00:53:51 np0005539482 systemd[1]: libpod-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope: Deactivated successfully.
Nov 29 00:53:51 np0005539482 podman[290130]: 2025-11-29 05:53:51.480801418 +0000 UTC m=+1.223317491 container died 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:53:51 np0005539482 systemd[1]: libpod-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope: Consumed 1.038s CPU time.
Nov 29 00:53:51 np0005539482 systemd[1]: var-lib-containers-storage-overlay-7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7-merged.mount: Deactivated successfully.
Nov 29 00:53:51 np0005539482 podman[290130]: 2025-11-29 05:53:51.547073035 +0000 UTC m=+1.289589088 container remove 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:53:51 np0005539482 systemd[1]: libpod-conmon-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope: Deactivated successfully.
Nov 29 00:53:51 np0005539482 podman[290177]: 2025-11-29 05:53:51.667491794 +0000 UTC m=+0.149534510 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 00:53:51 np0005539482 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.955 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.988 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.988 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.989 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 00:53:51 np0005539482 nova_compute[254898]: 2025-11-29 05:53:51.989 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.230366263 +0000 UTC m=+0.059152034 container create b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 00:53:52 np0005539482 systemd[1]: Started libpod-conmon-b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87.scope.
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.203366185 +0000 UTC m=+0.032152016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:53:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.31092134 +0000 UTC m=+0.139707131 container init b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.323961148 +0000 UTC m=+0.152746899 container start b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.328000017 +0000 UTC m=+0.156785818 container attach b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 00:53:52 np0005539482 vigorous_maxwell[290390]: 167 167
Nov 29 00:53:52 np0005539482 systemd[1]: libpod-b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87.scope: Deactivated successfully.
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.334511206 +0000 UTC m=+0.163296997 container died b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:53:52 np0005539482 systemd[1]: var-lib-containers-storage-overlay-a191d008ef9464b834bec9b4da66ad38c21b6bfca9f7b1508d270484a707bde9-merged.mount: Deactivated successfully.
Nov 29 00:53:52 np0005539482 podman[290374]: 2025-11-29 05:53:52.376729656 +0000 UTC m=+0.205515447 container remove b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:53:52 np0005539482 systemd[1]: libpod-conmon-b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87.scope: Deactivated successfully.
Nov 29 00:53:52 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:53:52 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272420030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.487 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:53:52 np0005539482 podman[290415]: 2025-11-29 05:53:52.573304774 +0000 UTC m=+0.059209816 container create 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 00:53:52 np0005539482 podman[290415]: 2025-11-29 05:53:52.549349409 +0000 UTC m=+0.035254461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:53:52 np0005539482 systemd[1]: Started libpod-conmon-248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262.scope.
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.693 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.698 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.698 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.699 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 00:53:52 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:53:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:52 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:52 np0005539482 podman[290415]: 2025-11-29 05:53:52.753633566 +0000 UTC m=+0.239538668 container init 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:53:52 np0005539482 podman[290415]: 2025-11-29 05:53:52.761944439 +0000 UTC m=+0.247849501 container start 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 00:53:52 np0005539482 podman[290415]: 2025-11-29 05:53:52.765575418 +0000 UTC m=+0.251480490 container attach 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.870 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 00:53:52 np0005539482 nova_compute[254898]: 2025-11-29 05:53:52.871 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 00:53:52 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.014 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.130 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.130 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.145 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.171 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.191 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 00:53:53 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 00:53:53 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174227071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]: {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:    "0": [
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:        {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "devices": [
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "/dev/loop3"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            ],
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_name": "ceph_lv0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_size": "21470642176",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "name": "ceph_lv0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "tags": {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cluster_name": "ceph",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.crush_device_class": "",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.encrypted": "0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osd_id": "0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.type": "block",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.vdo": "0"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            },
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "type": "block",
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.603 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "vg_name": "ceph_vg0"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:        }
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:    ],
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:    "1": [
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:        {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "devices": [
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "/dev/loop4"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            ],
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_name": "ceph_lv1",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_size": "21470642176",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "name": "ceph_lv1",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "tags": {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cluster_name": "ceph",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.crush_device_class": "",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.encrypted": "0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osd_id": "1",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.type": "block",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.vdo": "0"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            },
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "type": "block",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "vg_name": "ceph_vg1"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:        }
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:    ],
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:    "2": [
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:        {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "devices": [
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "/dev/loop5"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            ],
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_name": "ceph_lv2",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_size": "21470642176",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "name": "ceph_lv2",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "tags": {
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cephx_lockbox_secret": "",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.cluster_name": "ceph",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.crush_device_class": "",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.encrypted": "0",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osd_id": "2",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.type": "block",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:                "ceph.vdo": "0"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            },
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "type": "block",
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:            "vg_name": "ceph_vg2"
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:        }
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]:    ]
Nov 29 00:53:53 np0005539482 compassionate_shockley[290432]: }
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.612 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 00:53:53 np0005539482 systemd[1]: libpod-248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262.scope: Deactivated successfully.
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.662 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.664 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 00:53:53 np0005539482 nova_compute[254898]: 2025-11-29 05:53:53.664 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 00:53:53 np0005539482 podman[290463]: 2025-11-29 05:53:53.686535526 +0000 UTC m=+0.033093348 container died 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 00:53:53 np0005539482 systemd[1]: var-lib-containers-storage-overlay-6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b-merged.mount: Deactivated successfully.
Nov 29 00:53:53 np0005539482 podman[290463]: 2025-11-29 05:53:53.736526777 +0000 UTC m=+0.083084579 container remove 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:53:53 np0005539482 systemd[1]: libpod-conmon-248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262.scope: Deactivated successfully.
Nov 29 00:53:54 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:54 np0005539482 podman[290617]: 2025-11-29 05:53:54.943368574 +0000 UTC m=+0.043975584 container create 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 00:53:54 np0005539482 nova_compute[254898]: 2025-11-29 05:53:54.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:54 np0005539482 nova_compute[254898]: 2025-11-29 05:53:54.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 00:53:54 np0005539482 nova_compute[254898]: 2025-11-29 05:53:54.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 00:53:54 np0005539482 systemd[1]: Started libpod-conmon-9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78.scope.
Nov 29 00:53:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:53:55 np0005539482 podman[290617]: 2025-11-29 05:53:54.924565175 +0000 UTC m=+0.025172215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:53:55 np0005539482 podman[290617]: 2025-11-29 05:53:55.025598481 +0000 UTC m=+0.126205501 container init 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 00:53:55 np0005539482 podman[290617]: 2025-11-29 05:53:55.033492543 +0000 UTC m=+0.134099543 container start 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 00:53:55 np0005539482 podman[290617]: 2025-11-29 05:53:55.037580143 +0000 UTC m=+0.138187193 container attach 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 00:53:55 np0005539482 fervent_bassi[290634]: 167 167
Nov 29 00:53:55 np0005539482 systemd[1]: libpod-9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78.scope: Deactivated successfully.
Nov 29 00:53:55 np0005539482 podman[290617]: 2025-11-29 05:53:55.040427873 +0000 UTC m=+0.141034873 container died 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:53:55 np0005539482 systemd[1]: var-lib-containers-storage-overlay-c29fe19f0c5dfa91351a7186d772315877c3969455e3531c820c3f2fdb2bbced-merged.mount: Deactivated successfully.
Nov 29 00:53:55 np0005539482 podman[290617]: 2025-11-29 05:53:55.080123242 +0000 UTC m=+0.180730242 container remove 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:53:55 np0005539482 systemd[1]: libpod-conmon-9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78.scope: Deactivated successfully.
Nov 29 00:53:55 np0005539482 podman[290658]: 2025-11-29 05:53:55.258715351 +0000 UTC m=+0.042817466 container create 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 00:53:55 np0005539482 systemd[1]: Started libpod-conmon-8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0.scope.
Nov 29 00:53:55 np0005539482 systemd[1]: Started libcrun container.
Nov 29 00:53:55 np0005539482 podman[290658]: 2025-11-29 05:53:55.243331945 +0000 UTC m=+0.027434080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 00:53:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:55 np0005539482 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 00:53:55 np0005539482 podman[290658]: 2025-11-29 05:53:55.355880893 +0000 UTC m=+0.139983008 container init 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 00:53:55 np0005539482 podman[290658]: 2025-11-29 05:53:55.371366451 +0000 UTC m=+0.155468606 container start 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:53:55 np0005539482 podman[290658]: 2025-11-29 05:53:55.375656495 +0000 UTC m=+0.159758630 container attach 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]: {
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:    "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "osd_id": 0,
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "type": "bluestore"
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:    },
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:    "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "osd_id": 1,
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "type": "bluestore"
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:    },
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:    "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "osd_id": 2,
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:        "type": "bluestore"
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]:    }
Nov 29 00:53:56 np0005539482 pensive_clarke[290674]: }
Nov 29 00:53:56 np0005539482 systemd[1]: libpod-8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0.scope: Deactivated successfully.
Nov 29 00:53:56 np0005539482 podman[290658]: 2025-11-29 05:53:56.351874883 +0000 UTC m=+1.135977008 container died 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 00:53:56 np0005539482 systemd[1]: var-lib-containers-storage-overlay-d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee-merged.mount: Deactivated successfully.
Nov 29 00:53:56 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:56 np0005539482 podman[290658]: 2025-11-29 05:53:56.418157281 +0000 UTC m=+1.202259396 container remove 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 00:53:56 np0005539482 systemd[1]: libpod-conmon-8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0.scope: Deactivated successfully.
Nov 29 00:53:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 00:53:56 np0005539482 systemd-logind[793]: New session 54 of user zuul.
Nov 29 00:53:56 np0005539482 systemd[1]: Started Session 54 of User zuul.
Nov 29 00:53:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:53:56 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 00:53:56 np0005539482 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:53:56 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 4b37723a-64e0-43a6-a6a3-947e3866b4a9 does not exist
Nov 29 00:53:56 np0005539482 ceph-mgr[75473]: [progress WARNING root] complete: ev 5c224873-7571-4f16-9922-29f768da0712 does not exist
Nov 29 00:53:56 np0005539482 nova_compute[254898]: 2025-11-29 05:53:56.763 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 00:53:56 np0005539482 nova_compute[254898]: 2025-11-29 05:53:56.765 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:56 np0005539482 nova_compute[254898]: 2025-11-29 05:53:56.765 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 00:53:57 np0005539482 nova_compute[254898]: 2025-11-29 05:53:57.096 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 00:53:57 np0005539482 nova_compute[254898]: 2025-11-29 05:53:57.096 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:57 np0005539482 nova_compute[254898]: 2025-11-29 05:53:57.097 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 00:53:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:53:57 np0005539482 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 00:53:57 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:53:58 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:53:59 np0005539482 nova_compute[254898]: 2025-11-29 05:53:59.159 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 00:53:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14825 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:53:59 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14827 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:00 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 00:54:00 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015181172' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 00:54:00 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:02 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:02 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:54:02 np0005539482 ovs-vsctl[291060]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 00:54:03 np0005539482 podman[291057]: 2025-11-29 05:54:03.008136852 +0000 UTC m=+0.061081222 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 00:54:03 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 00:54:03 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 00:54:03 np0005539482 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 00:54:04 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:04 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: cache status {prefix=cache status} (starting...)
Nov 29 00:54:04 np0005539482 lvm[291412]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 00:54:04 np0005539482 lvm[291412]: VG ceph_vg1 finished
Nov 29 00:54:04 np0005539482 lvm[291414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 00:54:04 np0005539482 lvm[291414]: VG ceph_vg0 finished
Nov 29 00:54:04 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: client ls {prefix=client ls} (starting...)
Nov 29 00:54:04 np0005539482 lvm[291450]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 00:54:04 np0005539482 lvm[291450]: VG ceph_vg2 finished
Nov 29 00:54:04 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14831 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:05 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 00:54:05 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 00:54:05 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14833 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:05 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 00:54:05 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 00:54:05 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 00:54:05 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 00:54:05 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786193635' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 00:54:05 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 00:54:06 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14839 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:06 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:54:06.086+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:54:06 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:54:06 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415974482' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 00:54:06 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 00:54:06 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940364020' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635041196' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 00:54:06 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: ops {prefix=ops} (starting...)
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622458214' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 00:54:06 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995625949' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 00:54:07 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session ls {prefix=session ls} (starting...)
Nov 29 00:54:07 np0005539482 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: status {prefix=status} (starting...)
Nov 29 00:54:07 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14853 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2293629981' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 00:54:07 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14855 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525284977' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.918572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647918625, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 910, "num_deletes": 255, "total_data_size": 1176215, "memory_usage": 1194128, "flush_reason": "Manual Compaction"}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647927952, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1164650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33335, "largest_seqno": 34244, "table_properties": {"data_size": 1160132, "index_size": 2106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10000, "raw_average_key_size": 19, "raw_value_size": 1150960, "raw_average_value_size": 2221, "num_data_blocks": 94, "num_entries": 518, "num_filter_entries": 518, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395568, "oldest_key_time": 1764395568, "file_creation_time": 1764395647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9452 microseconds, and 4387 cpu microseconds.
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.928021) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1164650 bytes OK
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.928051) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929305) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929321) EVENT_LOG_v1 {"time_micros": 1764395647929316, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929346) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1171750, prev total WAL file size 1171750, number of live WAL files 2.
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323533' seq:0, type:0; will stop at (end)
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1137KB)], [71(8555KB)]
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647929952, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9925256, "oldest_snapshot_seqno": -1}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6274 keys, 9634245 bytes, temperature: kUnknown
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647989292, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9634245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9592327, "index_size": 25104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 159186, "raw_average_key_size": 25, "raw_value_size": 9479874, "raw_average_value_size": 1510, "num_data_blocks": 1018, "num_entries": 6274, "num_filter_entries": 6274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.989509) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9634245 bytes
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.990332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.1 rd, 162.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.8) write-amplify(8.3) OK, records in: 6796, records dropped: 522 output_compression: NoCompression
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.990347) EVENT_LOG_v1 {"time_micros": 1764395647990340, "job": 40, "event": "compaction_finished", "compaction_time_micros": 59406, "compaction_time_cpu_micros": 23942, "output_level": 6, "num_output_files": 1, "total_output_size": 9634245, "num_input_records": 6796, "num_output_records": 6274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647990600, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647991810, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:54:07 np0005539482 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701728340' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1067124720' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 00:54:08 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424644693' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181782022' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 00:54:08 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:08 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:54:08.891+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 00:54:08 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 00:54:08 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137024356' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 00:54:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 00:54:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266566497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 00:54:09 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:09 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 00:54:09 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197600433' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 00:54:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 00:54:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255992290' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 00:54:10 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:10 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 00:54:10 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012891697' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557761d1dc00
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5631 writes, 23K keys, 5631 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5631 writes, 860 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.950073242s of 600.213012695s, submitted: 90
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14887 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 200.325714111s of 200.562088013s, submitted: 90
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 17432576 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916268 data_alloc: 218103808 data_used: 180224
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 123 ms_handle_reset con 0x557763f08000 session 0x5577631b30e0
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 17440768 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17481728 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 124 ms_handle_reset con 0x557765b97c00 session 0x557765010000
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe39000/0x0/0x4ffc00000, data 0xd2e970/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 17408000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925293 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe38000/0x0/0x4ffc00000, data 0xd2e993/0xde4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.306422234s of 10.512654305s, submitted: 45
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.103341103s of 12.113625526s, submitted: 13
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 10
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 11
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.626058578s of 10.632491112s, submitted: 2
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.001748085s of 12.013872147s, submitted: 4
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930625 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.040862083s of 12.053675652s, submitted: 4
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928167 data_alloc: 218103808 data_used: 184320
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.859819412s of 16.871786118s, submitted: 28
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 12
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 17235968 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939171 data_alloc: 218103808 data_used: 200704
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe2e000/0x0/0x4ffc00000, data 0xd33b54/0xdee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.211776733s of 18.236698151s, submitted: 18
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbe29000/0x0/0x4ffc00000, data 0xd372c6/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe25000/0x0/0x4ffc00000, data 0xd38edc/0xdf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951491 data_alloc: 218103808 data_used: 208896
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe23000/0x0/0x4ffc00000, data 0xd3aaf2/0xdfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe1f000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955423 data_alloc: 218103808 data_used: 212992
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.339121819s of 10.671369553s, submitted: 123
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 17047552 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe19000/0x0/0x4ffc00000, data 0xd3fd71/0xe03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961321 data_alloc: 218103808 data_used: 221184
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe1c000/0x0/0x4ffc00000, data 0xd3fcd6/0xe02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959079 data_alloc: 218103808 data_used: 221184
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.049346924s of 10.169968605s, submitted: 40
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965021 data_alloc: 218103808 data_used: 229376
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 15917056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964141 data_alloc: 218103808 data_used: 229376
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991994858s of 11.068979263s, submitted: 40
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0xd433da/0xe09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968139 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.993079185s of 12.021212578s, submitted: 14
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.520608902s of 10.532555580s, submitted: 3
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 15802368 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.498485565s of 13.504686356s, submitted: 2
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 ms_handle_reset con 0x557765b96800 session 0x557764f4fe00
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 13
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971135 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 14983168 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974495 data_alloc: 218103808 data_used: 237568
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.806947708s of 11.988073349s, submitted: 235
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980069 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0b000/0x0/0x4ffc00000, data 0xd4858e/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.121107101s of 24.133726120s, submitted: 13
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a08c/0xe15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984139 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a127/0xe16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986619 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986571 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.074976921s of 12.103597641s, submitted: 7
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a157/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988043 data_alloc: 218103808 data_used: 245760
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a185/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 140 handle_osd_map epochs [141,142], i have 140, src has [1,142]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993155 data_alloc: 218103808 data_used: 253952
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe01000/0x0/0x4ffc00000, data 0xd4d8db/0xe1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.097406387s of 11.276707649s, submitted: 61
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997329 data_alloc: 218103808 data_used: 262144
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f3f4/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998041 data_alloc: 218103808 data_used: 262144
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991775513s of 10.043452263s, submitted: 26
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 13819904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000645 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e25/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002413 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f7f/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.350621223s of 11.425502777s, submitted: 31
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf5000/0x0/0x4ffc00000, data 0xd51047/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007621 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 12541952 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf2000/0x0/0x4ffc00000, data 0xd511a7/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011157 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 12509184 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 12460032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd5117b/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.751610756s of 11.044014931s, submitted: 37
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 12419072 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010409 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf4000/0x0/0x4ffc00000, data 0xd510b1/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 12386304 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50fe8/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010199 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf6000/0x0/0x4ffc00000, data 0xd50fb7/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006959 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.834184647s of 10.926655769s, submitted: 30
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbd/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008855 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50e84/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010495 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022552490s of 10.158326149s, submitted: 18
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008551 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 12271616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.966061592s of 12.095813751s, submitted: 15
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbc/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009501 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e51/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.408202171s of 10.675523758s, submitted: 17
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011349 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f47/0xe24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd526bd/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018011 data_alloc: 218103808 data_used: 278528
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0xd5a18b/0xe2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.418998718s of 10.583705902s, submitted: 59
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdc6000/0x0/0x4ffc00000, data 0xd839d6/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024267 data_alloc: 218103808 data_used: 278528
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdb9000/0x0/0x4ffc00000, data 0xd9187a/0xe64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [1])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 7217152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 6660096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 6668288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fabc9000/0x0/0x4ffc00000, data 0xde24b0/0xeb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 6643712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028237 data_alloc: 218103808 data_used: 278528
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 6725632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 6709248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fab98000/0x0/0x4ffc00000, data 0xe11be3/0xee5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5660672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 5603328 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.068192482s of 10.000307083s, submitted: 80
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab94000/0x0/0x4ffc00000, data 0xe13646/0xee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037963 data_alloc: 218103808 data_used: 286720
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab47000/0x0/0x4ffc00000, data 0xe61e3d/0xf36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4784128 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4300800 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 4071424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaea000/0x0/0x4ffc00000, data 0xebf2b3/0xf94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043289 data_alloc: 218103808 data_used: 294912
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3252224 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faab2000/0x0/0x4ffc00000, data 0xef3a69/0xfca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2957312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2662400 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xf07fb6/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.752583504s of 10.000064850s, submitted: 81
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 1425408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048977 data_alloc: 218103808 data_used: 294912
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1417216 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faa4a000/0x0/0x4ffc00000, data 0xf5e02d/0x1034000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86704128 unmapped: 2375680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 2940928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067411 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 1630208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.663159370s of 10.000439644s, submitted: 117
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2211840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080451 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 3194880 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4ea000/0x0/0x4ffc00000, data 0x10a58fb/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 3219456 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 14
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10afbb3/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 1466368 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093257 data_alloc: 218103808 data_used: 307200
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 1835008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x1157f2b/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1818624 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 1368064 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 7984 writes, 30K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 7984 writes, 1865 syncs, 4.28 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2353 writes, 6787 keys, 2353 commit groups, 1.0 writes per commit group, ingest: 7.64 MB, 0.01 MB/s
Interval WAL: 2353 writes, 1005 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.250799179s of 10.555690765s, submitted: 96
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157dbe/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088671 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157df1/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157df1/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087325 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557764265800
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 2326528 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.960803032s of 10.004592896s, submitted: 14
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087841 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 2310144 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157cb6/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 2293760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089913 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 2285568 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157db0/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087327 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.876619339s of 11.963118553s, submitted: 28
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157c4c/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090125 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157ce1/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x1157d0c/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089131 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c46/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157b7f/0x1230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.725197792s of 12.809599876s, submitted: 25
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 2228224 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.085176468s of 10.108474731s, submitted: 6
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092185 data_alloc: 218103808 data_used: 311296
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090523 data_alloc: 218103808 data_used: 311296
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 2179072 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 2260992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 15
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x115cc1b/0x1236000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059730530s of 10.469105721s, submitted: 159
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x115ccb6/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x115e719/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x115e7b4/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104005 data_alloc: 218103808 data_used: 319488
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.492309570s of 11.524030685s, submitted: 14
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x115e8c4/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111955 data_alloc: 218103808 data_used: 327680
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 2088960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x11605e0/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89096192 unmapped: 2080768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117181 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x1161ec8/0x1243000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.267296791s of 10.416739464s, submitted: 51
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 2048000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x1161e2d/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120383 data_alloc: 218103808 data_used: 344064
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 2031616 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 155 ms_handle_reset con 0x557763f08000 session 0x55776350d0e0
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 598016 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x1165511/0x1249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 16
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124387 data_alloc: 218103808 data_used: 344064
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.574859619s of 10.815853119s, submitted: 264
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0x1167127/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 157 handle_osd_map epochs [158,159], i have 157, src has [1,159]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137003 data_alloc: 218103808 data_used: 352256
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139835 data_alloc: 218103808 data_used: 352256
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.203613281s of 10.384685516s, submitted: 64
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141625 data_alloc: 218103808 data_used: 352256
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c4dc/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 17
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153121 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91652096 unmapped: 1622016 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x116fc59/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928189278s of 10.833756447s, submitted: 92
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146159 data_alloc: 218103808 data_used: 364544
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa416000/0x0/0x4ffc00000, data 0x116fa7c/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 1572864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.228721619s of 64.248054504s, submitted: 16
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 ms_handle_reset con 0x557764264c00 session 0x5577635ba1e0
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 1343488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 18
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x11715ba/0x125c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.423639297s of 11.447608948s, submitted: 183
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155219 data_alloc: 218103808 data_used: 380928
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.546059608s of 12.619788170s, submitted: 25
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 29 00:54:10 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158853 data_alloc: 218103808 data_used: 389120
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa40d000/0x0/0x4ffc00000, data 0x1174b68/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.282610893s of 14.913866997s, submitted: 51
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.777770996s of 21.886068344s, submitted: 15
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.618280411s of 16.621215820s, submitted: 1
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.889047623s of 25.900033951s, submitted: 3
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166389 data_alloc: 218103808 data_used: 409600
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 1228800 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 1220608 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config show' '{prefix=config show}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 2285568 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 2351104 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 2342912 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 13156352 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 13156352 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 75.936843872s of 76.001602173s, submitted: 35
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 ms_handle_reset con 0x557765b96000 session 0x5577650065a0
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Got map version 19
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 9735 writes, 34K keys, 9735 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9735 writes, 2412 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1751 writes, 3893 keys, 1751 commit groups, 1.0 writes per commit group, ingest: 1.60 MB, 0.00 MB/s
Interval WAL: 1751 writes, 547 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 12754944 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.270599365s of 299.284576416s, submitted: 157
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92643328 unmapped: 12722176 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 12713984 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 00:54:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453185045' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 11739136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config show' '{prefix=config show}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 11362304 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 11771904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:11 np0005539482 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}'
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14891 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:54:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 00:54:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648564266' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14895 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 00:54:11 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:54:11 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 00:54:11 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003276651' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 00:54:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 00:54:12 np0005539482 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 00:54:12 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14901 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:54:12 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 00:54:12 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413715245' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 00:54:12 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 00:54:12 np0005539482 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14909 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 00:54:12 np0005539482 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:54:12.971+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:54:12 np0005539482 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604468007' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423410994' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942216470' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 00:54:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:54:13.769 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 00:54:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:54:13.770 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 00:54:13 np0005539482 ovn_metadata_agent[163968]: 2025-11-29 05:54:13.770 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 00:54:13 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326686157' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468367657' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857430637' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2165627524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2165627524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335514251' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719354350' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 00:54:14 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147154485' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 00:54:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 00:54:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066747595' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 00:54:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 00:54:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461734485' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 475136 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 303104 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc ms_handle_reset ms_handle_reset con 0x55909679fc00
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_configure stats_period=5
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 ms_handle_reset con 0x559097d03400 session 0x5590967283c0
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 ms_handle_reset con 0x5590971ab800 session 0x559097306780
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 925696 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 925696 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 7055 writes, 29K keys, 7055 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7055 writes, 1300 syncs, 5.43 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.909240723s of 600.174255371s, submitted: 90
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 1900544 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858718 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 1802240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.354232788s of 200.571792603s, submitted: 90
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 1802240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 1777664 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867538 data_alloc: 218103808 data_used: 233472
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 123 ms_handle_reset con 0x55909a3a6000 session 0x559099864960
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 1753088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca39000/0x0/0x4ffc00000, data 0x12cfa4/0x1e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 18464768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 124 ms_handle_reset con 0x559097d2ac00 session 0x559099b63a40
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3a000/0x0/0x4ffc00000, data 0x112cfb3/0x11e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986920 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 18440192 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x112eb4c/0x11e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 18440192 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x112eb4c/0x11e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 18399232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.313594818s of 22.573022842s, submitted: 39
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 18251776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Got map version 10
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 18194432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x113532e/0x11ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 18194432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 18104320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992562 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba23000/0x0/0x4ffc00000, data 0x1140c8a/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 15917056 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba23000/0x0/0x4ffc00000, data 0x1140c8a/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 16089088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994068 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Got map version 11
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.530145645s of 10.658089638s, submitted: 43
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x11533a0/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 15925248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 15818752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998324 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 15745024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 15745024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 15720448 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba03000/0x0/0x4ffc00000, data 0x115f605/0x121b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 15687680 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 14639104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997130 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 14524416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x11678e1/0x1222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.448275566s of 10.000021935s, submitted: 39
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 14401536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9f3000/0x0/0x4ffc00000, data 0x117085c/0x122b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 14376960 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 14352384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 14286848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000216 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 14286848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 14278656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 14254080 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9e7000/0x0/0x4ffc00000, data 0x117c65f/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 14196736 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 14196736 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998990 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 14098432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 14098432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.366621971s of 10.501939774s, submitted: 33
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9e4000/0x0/0x4ffc00000, data 0x1180231/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 14008320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 14008320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001710 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0x118a393/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0x118a393/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002326 data_alloc: 218103808 data_used: 249856
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 14147584 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 14139392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9d1000/0x0/0x4ffc00000, data 0x1191054/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.712274551s of 13.102365494s, submitted: 45
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005064 data_alloc: 218103808 data_used: 258048
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 13959168 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9c4000/0x0/0x4ffc00000, data 0x119c6f7/0x125a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 13910016 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 13811712 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008394 data_alloc: 218103808 data_used: 258048
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9c0000/0x0/0x4ffc00000, data 0x11a077b/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 13778944 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 13778944 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9ba000/0x0/0x4ffc00000, data 0x11a636d/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 11427840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa801000/0x0/0x4ffc00000, data 0x11bb43e/0x127c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 11427840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Got map version 12
Nov 29 00:54:15 np0005539482 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 11370496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.686246872s of 10.000534058s, submitted: 55
Nov 29 00:54:15 np0005539482 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160944512' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014752 data_alloc: 218103808 data_used: 266240
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 10985472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 10928128 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7e2000/0x0/0x4ffc00000, data 0x11dabf8/0x129c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7e3000/0x0/0x4ffc00000, data 0x11dab4b/0x129b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020594 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e9505/0x12aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 10485760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 10485760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.237507820s of 10.010437965s, submitted: 53
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025372 data_alloc: 218103808 data_used: 270336
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 10502144 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 10592256 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7a7000/0x0/0x4ffc00000, data 0x121746f/0x12d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [0,0,2])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 10543104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa794000/0x0/0x4ffc00000, data 0x1227b64/0x12ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 10543104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa795000/0x0/0x4ffc00000, data 0x1227b32/0x12e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 10518528 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034972 data_alloc: 218103808 data_used: 278528
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 10461184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 10461184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa76f000/0x0/0x4ffc00000, data 0x124b864/0x130e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 9461760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9330688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 9256960 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042542 data_alloc: 218103808 data_used: 274432
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.984232903s of 10.380507469s, submitted: 144
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9404416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89112576 unmapped: 8355840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa330000/0x0/0x4ffc00000, data 0x1279979/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 8323072 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89219072 unmapped: 8249344 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa317000/0x0/0x4ffc00000, data 0x12928da/0x1355000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 8241152 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044072 data_alloc: 218103808 data_used: 274432
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa316000/0x0/0x4ffc00000, data 0x12959b1/0x1358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 8306688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 8167424 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 8167424 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 8118272 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 8298496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053172 data_alloc: 218103808 data_used: 282624
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 8298496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.369832993s of 10.755517006s, submitted: 131
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2e4000/0x0/0x4ffc00000, data 0x12c3c14/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2d6000/0x0/0x4ffc00000, data 0x12d2acb/0x1398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 8159232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 8036352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 8036352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 7979008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058860 data_alloc: 218103808 data_used: 286720
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2bd000/0x0/0x4ffc00000, data 0x12ea68a/0x13b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89595904 unmapped: 7872512 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 7700480 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90816512 unmapped: 6651904 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 6586368 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fa7c6/0x13c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 7036928 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061434 data_alloc: 218103808 data_used: 294912
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90464256 unmapped: 7004160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.862014771s of 10.027328491s, submitted: 53
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa274000/0x0/0x4ffc00000, data 0x13323fc/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa274000/0x0/0x4ffc00000, data 0x13323fc/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 6529024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069084 data_alloc: 218103808 data_used: 294912
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 6324224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 6324224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa23b000/0x0/0x4ffc00000, data 0x136a897/0x1433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 6520832 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 6430720 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 6373376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072284 data_alloc: 218103808 data_used: 303104
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90783744 unmapped: 6684672 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa217000/0x0/0x4ffc00000, data 0x138e5f8/0x1457000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.346149445s of 10.593280792s, submitted: 69
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 6586368 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90456064 unmapped: 7012352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 6963200 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa207000/0x0/0x4ffc00000, data 0x139b281/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 6955008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075678 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 6955008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 5840896 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 5832704 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1f3000/0x0/0x4ffc00000, data 0x13b0953/0x147b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 5832704 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076688 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x13bd0c1/0x1486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.637351036s of 10.712457657s, submitted: 33
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 5578752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 5578752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 5554176 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079056 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c7cd4/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 5349376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 5349376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 5341184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 5234688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1b7000/0x0/0x4ffc00000, data 0x13ee245/0x14b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 5185536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081868 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 5185536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa18c000/0x0/0x4ffc00000, data 0x1418d7a/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 4890624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.908507347s of 10.015370369s, submitted: 35
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 4890624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 4931584 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082444 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa179000/0x0/0x4ffc00000, data 0x142b56b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 5332992 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa179000/0x0/0x4ffc00000, data 0x142b56b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa16e000/0x0/0x4ffc00000, data 0x143691e/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 5332992 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa16e000/0x0/0x4ffc00000, data 0x143691e/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 5169152 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085216 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92438528 unmapped: 5029888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 5693440 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 5693440 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.764957428s of 11.050184250s, submitted: 23
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 ms_handle_reset con 0x559096ee4400 session 0x55909a371a40
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa153000/0x0/0x4ffc00000, data 0x1451ca7/0x151b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Got map version 13
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 5087232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083152 data_alloc: 218103808 data_used: 315392
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 5087232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 3866624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 3801088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 3801088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93904896 unmapped: 3563520 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa11d000/0x0/0x4ffc00000, data 0x1486bbb/0x1551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094026 data_alloc: 218103808 data_used: 323584
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94060544 unmapped: 3407872 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 3301376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2981888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0d7000/0x0/0x4ffc00000, data 0x14cad4a/0x1597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.159432411s of 10.446393013s, submitted: 280
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2981888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 2949120 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101764 data_alloc: 218103808 data_used: 327680
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0c2000/0x0/0x4ffc00000, data 0x14e0882/0x15ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 3235840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 3014656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 3014656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 2924544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 2850816 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102396 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0a0000/0x0/0x4ffc00000, data 0x1500acb/0x15ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.922378540s of 11.010833740s, submitted: 41
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107168 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa078000/0x0/0x4ffc00000, data 0x1528089/0x15f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa078000/0x0/0x4ffc00000, data 0x1528089/0x15f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2138112 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105844 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa04b000/0x0/0x4ffc00000, data 0x1554fba/0x1623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 1671168 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.918478012s of 10.029466629s, submitted: 25
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 1703936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110248 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 1703936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1560189/0x162e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95830016 unmapped: 1638400 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111578 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 1523712 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa013000/0x0/0x4ffc00000, data 0x158d904/0x165b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97034240 unmapped: 1482752 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0x158d933/0x165a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 2342912 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 2342912 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.550374985s of 10.174007416s, submitted: 36
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2375680 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121358 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15c4606/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122254 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fa4000/0x0/0x4ffc00000, data 0x15fb1f5/0x16ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fa4000/0x0/0x4ffc00000, data 0x15fb25a/0x16ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 2867200 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121988 data_alloc: 218103808 data_used: 335872
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927287102s of 10.717306137s, submitted: 67
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 2473984 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f27000/0x0/0x4ffc00000, data 0x167852e/0x1746000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97140736 unmapped: 2424832 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97394688 unmapped: 2170880 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 1114112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 1114112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140216 data_alloc: 218103808 data_used: 344064
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 876544 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 1556480 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9ee6000/0x0/0x4ffc00000, data 0x16b7576/0x1788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 1556480 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98156544 unmapped: 1409024 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98213888 unmapped: 1351680 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142228 data_alloc: 218103808 data_used: 352256
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.739322662s of 10.120580673s, submitted: 129
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98320384 unmapped: 2293760 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e71000/0x0/0x4ffc00000, data 0x172a418/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e63000/0x0/0x4ffc00000, data 0x1738cfd/0x180b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x173b74a/0x180e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148780 data_alloc: 218103808 data_used: 352256
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1925120 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1925120 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 1703936 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 2752512 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e04000/0x0/0x4ffc00000, data 0x17943da/0x1868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 2686976 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160660 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99180544 unmapped: 2482176 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.834702492s of 11.411822319s, submitted: 75
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 2375680 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 2375680 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9dc9000/0x0/0x4ffc00000, data 0x17d0f56/0x18a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99475456 unmapped: 2187264 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 2056192 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156908 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1925120 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d7b000/0x0/0x4ffc00000, data 0x181d32b/0x18f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 1728512 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 1728512 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d69000/0x0/0x4ffc00000, data 0x1831591/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 1720320 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d2f000/0x0/0x4ffc00000, data 0x186b11c/0x193f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168314 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d2f000/0x0/0x4ffc00000, data 0x186b11c/0x193f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.561640739s of 10.917224884s, submitted: 76
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1925120 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 1957888 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 1908736 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177168 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd1000/0x0/0x4ffc00000, data 0x18c9181/0x199d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 1589248 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101900288 unmapped: 1859584 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101957632 unmapped: 1802240 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c83000/0x0/0x4ffc00000, data 0x19185d2/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 1474560 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101187584 unmapped: 3620864 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182486 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 3563520 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c20000/0x0/0x4ffc00000, data 0x197b760/0x1a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3129344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3129344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.754183769s of 10.750842094s, submitted: 94
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 2072576 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9bf1000/0x0/0x4ffc00000, data 0x19aad7e/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 1703936 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189048 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 1622016 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 1597440 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b80000/0x0/0x4ffc00000, data 0x1a1a472/0x1aee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194272 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b48000/0x0/0x4ffc00000, data 0x1a52e2b/0x1b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103055360 unmapped: 2801664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.477082253s of 10.629286766s, submitted: 73
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 1712128 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200826 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9af5000/0x0/0x4ffc00000, data 0x1aa5978/0x1b79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104153088 unmapped: 1703936 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 1359872 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9af6000/0x0/0x4ffc00000, data 0x1aa59e0/0x1b78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 1359872 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 2793472 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103383040 unmapped: 3522560 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202774 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 3514368 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9a8d000/0x0/0x4ffc00000, data 0x1b0df35/0x1be1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 3506176 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104857600 unmapped: 2048000 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 1949696 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.102632523s of 10.059342384s, submitted: 95
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105545728 unmapped: 2408448 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223360 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 2351104 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 2342912 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f95b0000/0x0/0x4ffc00000, data 0x1bdaf46/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 2326528 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105889792 unmapped: 2064384 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 2039808 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224156 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c1dff5/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 2039808 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c1dff5/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1802240 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 1785856 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1703936 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922871590s of 10.191562653s, submitted: 86
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1c57e35/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 1531904 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219940 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 1531904 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1c57e35/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 1433600 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 1327104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94fe000/0x0/0x4ffc00000, data 0x1c8d4d9/0x1d60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 1327104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 2359296 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226856 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94e8000/0x0/0x4ffc00000, data 0x1ca3492/0x1d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94c4000/0x0/0x4ffc00000, data 0x1cc71b5/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 548864 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.210618019s of 10.000334740s, submitted: 57
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94c4000/0x0/0x4ffc00000, data 0x1cc71b5/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f82f7000/0x0/0x4ffc00000, data 0x1cf41ea/0x1dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 466944 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237272 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 450560 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 1146880 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 108994560 unmapped: 2105344 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7107000/0x0/0x4ffc00000, data 0x1d42634/0x1e17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 786432 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 573440 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240492 data_alloc: 218103808 data_used: 360448
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.811102867s of 10.000490189s, submitted: 66
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237784 data_alloc: 218103808 data_used: 368640
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236390 data_alloc: 218103808 data_used: 368640
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f5000/0x0/0x4ffc00000, data 0x1d55a5c/0x1e29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.261400223s of 10.000162125s, submitted: 21
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f70f0000/0x0/0x4ffc00000, data 0x1d5755a/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241964 data_alloc: 218103808 data_used: 376832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f70f0000/0x0/0x4ffc00000, data 0x1d5755a/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244584 data_alloc: 218103808 data_used: 385024
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ee000/0x0/0x4ffc00000, data 0x1d5929e/0x1e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ee000/0x0/0x4ffc00000, data 0x1d5929e/0x1e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ed000/0x0/0x4ffc00000, data 0x1d59339/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 1605632 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.875500679s of 10.000182152s, submitted: 35
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 1597440 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247528 data_alloc: 218103808 data_used: 385024
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 1597440 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x1d59466/0x1e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251238 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x1d5afcb/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x1d5afcb/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.932135582s of 10.000720978s, submitted: 29
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250570 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Got map version 14
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x1d5b05f/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2999 syncs, 3.64 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3872 writes, 13K keys, 3872 commit groups, 1.0 writes per commit group, ingest: 20.11 MB, 0.03 MB/s#012Interval WAL: 3872 writes, 1699 syncs, 2.28 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x1d5b196/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255762 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e6000/0x0/0x4ffc00000, data 0x1d5b3c2/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e6000/0x0/0x4ffc00000, data 0x1d5b3c2/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5b552/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.908580780s of 10.000893593s, submitted: 23
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x5590971abc00 session 0x559096728f00
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 1540096 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e9000/0x0/0x4ffc00000, data 0x1d5b5ba/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254688 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x55909a3ba400 session 0x5590999f4000
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1531904 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x55909a3b9000 session 0x559099b630e0
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1531904 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255686 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5b6ec/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 2564096 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 2564096 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b817/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.950098038s of 10.003911972s, submitted: 16
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b817/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258052 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b9bb/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2547712 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2547712 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257758 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.925294876s of 10.001269341s, submitted: 25
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bab8/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256700 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bab8/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e9000/0x0/0x4ffc00000, data 0x1d5bb3e/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256876 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bb82/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.725295067s of 10.793285370s, submitted: 20
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258948 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bcd1/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bcd1/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 2498560 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264654 data_alloc: 218103808 data_used: 393216
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 2449408 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 2211840 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 2211840 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 770048 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7085000/0x0/0x4ffc00000, data 0x1dba787/0x1e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.767315865s of 10.010424614s, submitted: 67
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 688128 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f706e000/0x0/0x4ffc00000, data 0x1dd359e/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280744 data_alloc: 218103808 data_used: 397312
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 671744 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f701b000/0x0/0x4ffc00000, data 0x1e24c16/0x1f00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 122880 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 2113536 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6fdd000/0x0/0x4ffc00000, data 0x1e671fe/0x1f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 958464 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x1e976c1/0x1f71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278656 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 1515520 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 1515520 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x1ecb3cc/0x1fa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.705419540s of 10.009120941s, submitted: 105
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 1507328 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285022 data_alloc: 218103808 data_used: 401408
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 1507328 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 2146304 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x1ecba62/0x1fa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 2072576 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6f40000/0x0/0x4ffc00000, data 0x1f00b90/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [1])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Got map version 15
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 2244608 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 2154496 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296348 data_alloc: 218103808 data_used: 409600
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6ef6000/0x0/0x4ffc00000, data 0x1f4b216/0x2028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 1032192 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 802816 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300896 data_alloc: 218103808 data_used: 409600
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6ebf000/0x0/0x4ffc00000, data 0x1f817d1/0x205f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160684586s of 10.642349243s, submitted: 176
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 638976 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6eac000/0x0/0x4ffc00000, data 0x1f9425f/0x2072000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 00:54:15 np0005539482 ceph-osd[90181]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
